Commit Graph

78 Commits

Author SHA1 Message Date
Adam Simpkins
da18313de3 [caffe2] expose whether FBGEMM is available to the Python code (#54274)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/54274

Some of the Python tests need to be aware of whether or not FBGEMM is
available, so expose this setting in the pybind extension.
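
A hedged sketch of how a Python test might consume such a flag; the attribute name
`has_fbgemm` is an assumption for illustration, not necessarily the name this diff exposes:

```
import unittest

from caffe2.python import workspace

# Skip FBGEMM-dependent tests when the build has no FBGEMM support.
# (`has_fbgemm` is a hypothetical attribute name.)
@unittest.skipIf(not getattr(workspace, "has_fbgemm", False),
                 "FBGEMM is not available")
class FbgemmDependentTest(unittest.TestCase):
    def test_something(self):
        pass
```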
ghstack-source-id: 124317732

Test Plan: Will use this variable in the tests on D26658205.

Reviewed By: mraway

Differential Revision: D27171780

fbshipit-source-id: 4c94144a959bf8bf0e1553b6e029e94a91794e29
2021-03-19 12:52:14 -07:00
Adam Simpkins
81b9aa743b [pytorch] Update caffe2/python to eliminate Pyre errors (#52083)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/52083

This makes minor fixes in `caffe2/python` to address all errors currently
reported by Pyre.

I update the code to fix errors when doing so looked simple and safe,
and added `pyre-fixme` comments in other places.
ghstack-source-id: 121109695

Test Plan: Confirmed that Pyre no longer reports errors under `caffe2/python`

Differential Revision: D26272279

fbshipit-source-id: b1eb19d323b613f23280ce9c71e800e874ca1162
2021-02-11 11:04:59 -08:00
Bugra Akyildiz
eec201c138 Add last_n_window_collector
Summary:
Add `last_n_window_collector`, which C2 supports but PyTorch currently lacks: https://www.internalfb.com/intern/diffusion/FBS/browsefile/master/fbcode/caffe2/caffe2/operators/last_n_window_collector.cc?lines=139

## Problem that we are solving

This operator consumes multiple pieces of data and collects the last `n` elements that have been seen.

Suppose the following pieces of data are passed in:

```
  [1, 2, 3, 4]
  [5, 6, 7]
  [8, 9, 10, 11]
```

over three passes, with the collector size set to 6. The expected result is:

```
  [6, 7, 8, 9, 10, 11]
```
In effect, this is a FIFO (First In, First Out) mechanism: as data passes through the collector, newly arriving elements push the oldest ones out.

In this particular example, in the first pass (the data is `[1, 2, 3, 4]`), we hold `[1, 2, 3, 4]` in the queue, as our queue size is 6.

In the second pass (the data is `[5, 6, 7]`), we hold `[2, 3, 4, 5, 6, 7]` in the queue; since 1 was inserted first, it is dropped due to the size limit of the queue.

In the third pass (the data is `[8, 9, 10, 11]`), we hold `[6, 7, 8, 9, 10, 11]` in the queue, and `2, 3, 4, 5` are dropped due to the size limit of the queue.
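
This maps directly onto a `collections.deque` with `maxlen` (the mechanism the implementation section below describes); a minimal sketch:

```
from collections import deque

# A collector of size 6: deque(maxlen=6) evicts the oldest elements
# automatically as new ones are appended (FIFO behavior).
q = deque(maxlen=6)
for batch in ([1, 2, 3, 4], [5, 6, 7], [8, 9, 10, 11]):
    q.extend(batch)
print(list(q))  # [6, 7, 8, 9, 10, 11]
```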

For the multi-dimensional case, suppose we have the following data:

```
  [[1, 2], [2, 3], [3, 4], [4, 5]]
  [[5, 6], [6, 7], [7, 8]]
  [[8, 9], [9, 10], [10, 11], [11, 12]]
```

and our queue size is 6.

In the first pass, we will have `[[1, 2], [2, 3], [3, 4], [4, 5]]`.
In the second pass, we will have `[[2, 3], [3, 4], [4, 5], [5, 6], [6, 7], [7, 8]]`.
In the third pass, we will have `[[6, 7], [7, 8], [8, 9], [9, 10], [10, 11], [11, 12]]`.

### The implementation

I am using the FIFO queue from Python's `collections` library (`collections.deque`). It accepts a `maxlen` argument, which sets the size of the queue.

I take the last `n` elements of the tensor through slicing, and the operator does not copy data.
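
As a hedged illustration of the no-copy claim: basic slicing in PyTorch returns a view over the same storage.

```
import torch

# Taking the last n elements via slicing shares storage with the
# original tensor rather than copying it.
t = torch.arange(12)
window = t[-6:]
assert window.storage().data_ptr() == t.storage().data_ptr()
```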

The test plan covers both single-dimension and multi-dimension tensors.

### Benchmark
I benchmarked several configurations and added a benchmark test. The PyTorch implementation is much faster than the Caffe2 implementation:

#### CPU Benchmark
```
torch_response.median
0.00019254473969340324

caffe_response.median
0.00030233583599794657
```

#### GPU Benchmark

```
torch_response.mean
0.000081007429903838786

caffe_response.mean
0.00010279081099724863
```

Test Plan:
### For CPU:
```
buck test //caffe2/torch/fb/sparsenn:test
```

### For GPU:
- Used an on-demand machine and did the following commands:
```
jf get D24435544
buck test mode/opt  //caffe2/torch/fb/sparsenn:test
```
https://www.internalfb.com/intern/testinfra/testconsole/testrun/4222124688138052/

Reviewed By: dzhulgakov, radkris-git

Differential Revision: D24435544

fbshipit-source-id: 8193b4746b20f2a4920fd4d41271341045cdcee1
2020-10-30 02:35:54 -07:00
Bugra Akyildiz
27c7158166 Remove __future__ imports for legacy Python2 support (#45033)
Summary:
The `2to3` tool's `future` fixer specifically removes these imports; the `caffe2` directory has the most redundant ones:

```2to3 -f future -w caffe2```
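
For reference, these are the imports the `future` fixer strips (a representative example):

```
# Removed by `2to3 -f future`:
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
from __future__ import unicode_literals
```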

Pull Request resolved: https://github.com/pytorch/pytorch/pull/45033

Reviewed By: seemethere

Differential Revision: D23808648

Pulled By: bugra

fbshipit-source-id: 38971900f0fe43ab44a9168e57f2307580d36a38
2020-09-23 17:57:02 -07:00
Yinghai Lu
8850fd1952 Add python inferface to create OfflineTensor (#42516)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/42516

att. We need it for some scripts.

Reviewed By: houseroad

Differential Revision: D22918112

fbshipit-source-id: 8a1696ceeeda67a34114bc57cb52c925711cfb4c
2020-08-04 01:31:34 -07:00
Colin L Reliability Rice
dfa914a90c Modify lazy_dyndep loading to trigger inside workspace. (#41687)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/41687

Specifically, this makes a new library (lazy), which can be used from both core
and workspace.

This allows `workspace.CreateNet` to trigger lazy loading of dyndep dependencies (see the sketch below).
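
A hedged sketch of the intended flow; the library path and op name are illustrative, and `RegisterOpsLibrary` is assumed to be the lazy_dyndep entry point:

```
from caffe2.python import core, lazy_dyndep, workspace

# Register an ops library lazily; nothing is loaded yet.
lazy_dyndep.RegisterOpsLibrary("//some/custom:ops_lib")  # illustrative path

net = core.Net("example")
net.MyCustomOp([], ["out"])  # hypothetical op provided by the library
workspace.CreateNet(net)     # triggers the deferred dyndep load
```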

Test Plan: Added a unit test specifically for workspace.CreateNet

Reviewed By: dzhulgakov

Differential Revision: D22441877

fbshipit-source-id: 3a9d1af9962585d08ea2566c9c85bec7377d39f2
2020-07-22 15:36:43 -07:00
lcskrishna
302cf6835e [ROCm][Caffe2] Enable MIOpen 3D Pooling (#38260)
Summary:
This PR contains the following updates:
1. MIOpen 3D pooling enabled in Caffe2.
2. Refactored the MIOpen pooling code in caffe2.
3. Enabled unit test cases for 3D pooling.

CC: ezyang jeffdaily ashishfarmer

Pull Request resolved: https://github.com/pytorch/pytorch/pull/38260

Differential Revision: D21524754

Pulled By: xw285cornell

fbshipit-source-id: ddfe09dc585cd61e42eee22eff8348d326fd0c3b
2020-07-08 17:42:55 -07:00
Brian Wignall
f326045b37 Fix typos, via a Levenshtein-type corrector (#31523)
Summary:
Should be non-semantic.

Uses https://en.wikipedia.org/wiki/Wikipedia:Lists_of_common_misspellings/For_machines to find likely typos, with https://github.com/bwignall/typochecker to help automate the checking.

Uses an updated version of the tool used in https://github.com/pytorch/pytorch/pull/30606 .
Pull Request resolved: https://github.com/pytorch/pytorch/pull/31523

Differential Revision: D19216749

Pulled By: mrshenli

fbshipit-source-id: 7fd489cb9a77cd7e4950c1046f925d57524960ea
2020-01-17 16:03:19 -08:00
Yinghai Lu
f0dd7517f2 Add option to clean up allocated activations between c2 runs (#29619)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/29619

att

Reviewed By: houseroad

Differential Revision: D18415190

fbshipit-source-id: 739aaf436578fac635df10de42b35e2b4368df37
2019-11-13 10:30:10 -08:00
Junjie Bai
9b147961c4 Fix get_gpu_memory_info in non-cuda builds (#21054)
Summary:
#21024 broke master
cc akyrola
Pull Request resolved: https://github.com/pytorch/pytorch/pull/21054

Reviewed By: akyrola

Differential Revision: D15533406

Pulled By: bddppq

fbshipit-source-id: 0dcfa0ce865e109b46280ef1786dbc7a8af30739
2019-05-28 23:05:15 -07:00
Aapo Kyrola
f1fe4b1114 add simple memory analyzer and log warning if GPU underutilized (#21024)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/21024

Add a new pybinded call to CUDAGetMemoryInfo.

Reviewed By: wesolwsk

Differential Revision: D15520607

fbshipit-source-id: f6d04e48f7d7cb089fc52fa8835cfee3f452d2f1
2019-05-28 19:58:54 -07:00
Ansha Yu
a9aaf698a4 add c2 benchmark runs in cpp (#20108)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/20108

Add cpp runs for c2, hooked up via pybinds. Print output to terminal. This is not hooked up with the pep output yet because I'd like to verify the numbers first.

Note that this isn't quite the same mechanism as the pytorch cpp hookup, which uses cpp_python_extensions. If I can use the same mechanism to pull all the inputs for c2 through cpp and do FeedBlobs in cpp, then I'll switch to that.

Reviewed By: zheng-xq

Differential Revision: D15155976

fbshipit-source-id: 708079dacd3e19aacfe43d70c5e5bc54da2cf9e3
2019-05-13 17:01:08 -07:00
Duc Ngo
e7b2669151 caffe2 - Expose tensor filler util to Python (#18886)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18886

Expose tensor filler util to Python and add a unit test (both C++/Python)

Reviewed By: salexspb

Differential Revision: D14784470

fbshipit-source-id: bb8e013d1755c27c166e87d5a8491a97c65d3d8d
2019-04-08 11:54:10 -07:00
Mingzhe Li
5f5a2aaab9 Operator-level performance microbenchmarks (#18740)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18740

Test utilities for writing Caffe2/PyTorch performance microbenchmarks. Brief description of the file structure

* benchmark_core.py : core utilities for running microbenchmark tests
* benchmark_caffe2.py : Caffe2-specific benchmark utilities
* benchmark_pytorch.py: PyTorch specific benchmark utilities
* benchmark_runner.py : Main function. Currently it can run the microbenchmark tests in a stand-alone mode. The next step is to have this integrate with AI-PEP.

The utilities are located at https://github.com/pytorch/pytorch/tree/master/test to have access to both Caffe2/PyTorch Python's frontend.

Include two operator microbenchmarks; support both Caffe2/PyTorch:
* MatMul
* Add

Reference: PyTorch benchmarks: https://github.com/pytorch/benchmark/tree/master/timing/python. In this work, we start with two example binary operators, MatMul and Add, but eventually we should cover unary operators as in the PyTorch benchmark repo.

Reviewed By: zheng-xq

Differential Revision: D13887111

fbshipit-source-id: b7a56b95448c9ec3e674b0de0ffb96af4439bfce
2019-04-02 17:06:19 -07:00
Dmytro Dzhulgakov
dec116e96f PyTorch/Caffe2 tensor interop in Python (#17190)
Summary:
Because of two separate python extensions with different pybind
instances I have to go through void* conversion. Since it's hidden from
user, it's fine.

New APIs added on C2 side:
- workspace.FetchTorch('blob')
- workspace.Workspace.current.blobs['blob'].to_torch()
- workspace.FeedBlob('blob', pytorch_tensor)

Works on CPU and GPU.

The only glitches are with resizing because of variable/tensor split.
But data sharing works properly.
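
A usage sketch assembled from the APIs listed above (blob name and values are illustrative):

```
import torch

from caffe2.python import workspace

t = torch.ones(2, 3)
workspace.FeedBlob("x", t)     # feed a PyTorch tensor into a Caffe2 blob
y = workspace.FetchTorch("x")  # fetch the blob back as a torch.Tensor
assert torch.equal(t, y)
```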
Pull Request resolved: https://github.com/pytorch/pytorch/pull/17190

Reviewed By: ezyang

Differential Revision: D14163882

Pulled By: dzhulgakov

fbshipit-source-id: d18e5b8fcae026f393c842a1149e972515732de2
2019-03-04 11:34:01 -08:00
rohithkrn
aa88c2c0b6 Unify gpu_support variable in python tests (#16748)
Summary:
Assign `has_gpu_support = has_cuda_support or has_hip_support` and make according changes in python tests.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16748

Differential Revision: D13983132

Pulled By: bddppq

fbshipit-source-id: ca496fd8c6ae3549b736bebd3ace7fa20a6dad7f
2019-02-07 00:29:51 -08:00
rohithkrn
0d663cec30 Unify cuda and hip device types in Caffe2 python front end (#14221)
Summary:
Goal of this PR is to unify cuda and hip device types in caffe2 python front end.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14221

Differential Revision: D13148564

Pulled By: bddppq

fbshipit-source-id: ef9bd2c7d238200165f217097ac5727e686d887b
2018-11-29 14:00:16 -08:00
Hong Li
d03c6ba50d Adding Fetching Real number representation
Summary: Adding fetching of the real-number representation for int8 tensors in workspace.py

Reviewed By: harouwu

Differential Revision: D12936556

fbshipit-source-id: f8756a37bce21c93d44d52faf5da9c9bd6473f4a
2018-11-05 23:35:24 -08:00
Lu Fang
30aaa07594 New serialization format (#12384)
Summary:
Addressed Dima's feedback.

The proposal is here: https://fb.quip.com/TbQmAuqIznCf
Pull Request resolved: https://github.com/pytorch/pytorch/pull/12384

Reviewed By: dzhulgakov

Differential Revision: D10246743

Pulled By: houseroad

fbshipit-source-id: c80db0c35d60ca32965275da705f2b1dfb2a7265
2018-10-16 16:36:58 -07:00
Yangqing Jia
083e037dea minor fix (#12688)
Summary:
This seems to be a typo that never got caught - no actual functionality changes.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/12688

Differential Revision: D10391704

Pulled By: Yangqing

fbshipit-source-id: ce633776957628c4881956c5423bfab78294d512
2018-10-15 17:25:49 -07:00
Dmytro Dzhulgakov
1d3f650ce4 Revert D10098106: [pytorch][PR] [WIP] New version of PT1 model format
Differential Revision:
D10098106

Original commit changeset: 94ec7fc57c84

fbshipit-source-id: 38f729b0970618f38359797b806cbbcd865f4715
2018-10-02 00:43:40 -07:00
Lu Fang
35becd1879 New version of PT1 model format (#12149)
Summary:
Considered four different existing formats: 1) static graph, 2) torch script, 3) pickle files, 4) PyTorch C++ serialize APIs
Pull Request resolved: https://github.com/pytorch/pytorch/pull/12149

Reviewed By: BIT-silence

Differential Revision: D10098106

Pulled By: houseroad

fbshipit-source-id: 94ec7fc57c842e50fae5286ddeda657a4967a07a
2018-10-01 15:57:02 -07:00
Lu Fang
32494c226e OperatorDef <==> NodeProto Conversion (#11621)
Summary:
Operator level proto conversion between (new) torch proto and (old) caffe2 proto.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/11621

Reviewed By: BIT-silence

Differential Revision: D9892422

Pulled By: houseroad

fbshipit-source-id: 01a55ec0a09479876a27082d90fc970723f4d431
2018-09-19 08:41:33 -07:00
Shihao Xu
72a84127b1 Add Workspace methods ws.feed_blob(name, arr) ws.remove_blob(name) (#10929)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/10929

Workspace class methods were missing on the Python side.

This enables writing the new checkpoint framework with more control over the workspace and a cleaner implementation.

Added

- ws.feed_blob(name, arr)

- ws.remove_blob(name)
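
A usage sketch for the new methods, assuming a numpy array and the `workspace.Workspace.current` handle:

```
import numpy as np

from caffe2.python import workspace

ws = workspace.Workspace.current
ws.feed_blob("weights", np.zeros((2, 2), dtype=np.float32))
ws.remove_blob("weights")
```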

Reviewed By: mraway

Differential Revision: D9486867

fbshipit-source-id: ea02d2e3a39d716a5a3da0482f57d4ac4c893763
2018-08-28 17:54:34 -07:00
Kittipat Virochsiri
2b134c72e6 Add interface to provide blob types to shape&type inference (#9643)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/9643

Current map interface assumes float data type, which is not always correct.

Reviewed By: kennyhorror

Differential Revision: D8455784

fbshipit-source-id: b94a31267760f7f97c15aa4b03008affc347fd10
2018-07-24 11:58:05 -07:00
Kittipat Virochsiri
01581037dc Add workspace.RunPlanInBackground (#9637)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/9637

Adding a method to run a plan in the background. The intended use is to run BlueWhale's data reading & preprocessing net in the background while the GPU is training.

Reviewed By: MisterTea

Differential Revision: D8906439

fbshipit-source-id: b1c73ca7327e2d87a8f873924e05ab3d161a3f1e
2018-07-20 14:56:12 -07:00
Peter Yeh
54db14e390 HIP Operators Generator--> HipOpG (#9322)
Summary:
The goal of this PR is to add infrastructure to convert (hipify) CUDA ops into [HIP](https://github.com/ROCm-Developer-Tools/HIP) ops at **compile** time.

Note that HIP ops, which are portable C++ code, can run on both AMD and NVIDIA platforms.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/9322

Differential Revision: D8884707

Pulled By: bddppq

fbshipit-source-id: dabc6319546002c308c10528238e6684f7aef0f8
2018-07-19 00:26:06 -07:00
Lin Li
0fe980c748 Memory usage measurement -- Caffe2 (#9017)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/9017

Closes https://github.com/pytorch/pytorch/pull/9017

Added "get_blob_size_bytes" to "pybind_state.cc" in Caffe2 to expose the size of blob in bytes.

Reviewed By: kuttas

Differential Revision: D8685696

fbshipit-source-id: 9a9d38f207c8c59ef534217181e8ce1514617628
2018-07-17 16:40:23 -07:00
bddppq
f94ae3ba1d
Update from facebook (#7696)
* Fix handling of empty batches in SumReduceDimsOp

As titled

* Deferrable async_scheduling finishRun fix

Proper order of finishing run operations in deferrable_async_scheduling net

* Simplify exception handling in async_scheduling

Simplify exception handling, no need to busy wait, thread that processes the
last task can finish the run

* [C2]worker_coordinator_memorize_worker_ids

As titled. This is related to T28689868, where the number of blobs we want to create is equal to the number of worker ids

* Add unit test for nets with no type set

* Ignore total length argument in symbolic_pad_packed_sequence

1. There was a mistake in the code: total_length was added to the wrong symbolic function (pack_padded_sequence) instead of (pad_packed_sequence).
2. No need to throw an exception if total_length is given, since it is only used to enable data_parallel training on multi-GPUs and has nothing to do with ONNX export, so just ignore it. https://fburl.com/tk4gciqp

* Add support for MKLDNN to async_scheduling

Just add MKLDNN as a possible CPU option to async_scheduling's pool function

* [AuFL][ensemble] support branch output for prediction

This diff supports using predictions from different branches and thus enables model ensembling (not fully independent).

* Fix a bug in add_loss in layer_model_helper

As titled.

* Support lradaption for adam

1.lr adaption operator
2.apply to dense adam

* Perf tweaks for async_scheduling

Restore single pool option + remove unnecessary (no-ops) calls

* add quantization to SparseSimdAdagradOp

add a bunch of quantization signatures to SparseSimdAdagradOp, implementations to come next

* [sr] [codemod] Change all SR callsites to use new API

@allow-large-files

This diff refactors all callsites of SR to use the slightly changed API introduced in the diff below. Really what this means is that you need to include the correct header. Also if you were using `ClientFactory::newFactory` you need to not prefix it with `ClientFactory::`.

```
cd ~/fbsource/fbcode
find ./ -type f -exec sed -i -e 's:#include "servicerouter/client/cpp2/ClientFactory.h":#include "servicerouter/client/cpp2/ServiceRouter.h":' -e 's:#include <servicerouter/client/cpp2/ClientFactory.h>:#include <servicerouter/client/cpp2/ServiceRouter.h>:' -e 's/ClientFactory::newFactory(/newFactory(/g' {} \;
```

Also manually fixed spots that couldn't be done automatically (or broke because they depended on transitive includes).

* Back out "Fix handling of empty batches in SumReduceDimsOp"

Original commit changeset: 282da1730cc2 This commit is blocking the
Github->fbcode sync, which really needs to get merged ASAP. D7881937 which this
diff depends on will be reverted in the sync D7990948 which causes this to
break. The sync diff cannot be patched with this reversion because it must be
landed against base revision 5c8c099 , and D7881937 must not be included in the
sync diff because it is breaking GPU tests that are not available in sandcastle
: https://ci.pytorch.org/jenkins/job/caffe2-builds/job/py2-cuda8.0-cudnn6-ubuntu16.04-test/3638/console
for one example.

* Add the flow to support operator benchmark

1) generate model with the operator 2) upload to everstore 3) generate model spec into json file 4) start running the benchmark

* [tum][gpu] Connect DPM trainer with flow and unit tests

This diff:
- Fix some small bugs for Yiming's recent changes to parallelizer, so it suits real use cases.
- Add correct tags to the TUM code, so we can do data parallel transform
- pass extra info at instantiation.
- add unit test for using DPM in TUM model

After this diff, we can do simple box, multi-gpu fully-sync trainer for TUM in Fblearner workflow, but may still need to do speed benchmarking.

* w/o normalized lradaption for adam dense only

The previous lr adaption includes a normalization step when performing the dot product operation. This is not exactly the same as what is proposed in the paper. I add normalization as an option. Without it, the operator performs exactly what the paper proposed; with the option, we add the normalization step.

* [fb] Use SharedPromise in DeferrableAsyncSchedulingNet

This code is to simplify DeferrableAsyncSchedulingNet by removing condition
variable + small fixes

* [tum] implement cuda sparseLengthsMean and LengthsMean

as title

* Adding an optional parameter to allow use of protobufs in InferShapesAndTypes function.

Adding an optional parameter to allow use of protobufs in InferShapesAndTypes function.

* Move feature_to_index to FeatureSpec.feature_to_index

move feature_to_index to FeatureSpec.feature_to_index to avoid overriding other fields

* [Caffe2] Rename bytes_moved to bytes_written

Just a rename in preparation for supporting bytes_read.

* [c2] fix ReduceFrontSumOp for empty case by setting 0

otherwise, it may use the results from the last iteration when the batch is empty.

* [Caffe2] [Int8] Improve Intel CPU performance

* [Easy] Improve PrependDim op logging

as titled

* DBFileReader expand db_path using os.path.expanduser(..)

Since there are a lot of possible use cases where `DBFileReader` reads from a user home path, like `~/local/sample.db`, I want to save people the trouble of calling `os.path.expanduser(db_path)` themselves.

* [Caffe2] Add bytes_read to cost structure

We're adding analytical read bytes to cost functions.  This extends the structure accordingly for all CostInference defined operators.
Additionally, some small bug fixes were performed:
1) Cost functions now extract type information of operands instead of assuming float

* Fix sleef on aarch64 for hhvm

@bypass-lint

Rename flag

* Remove duplicated part in caffe2/ideep/operators/conv_op.cc

should be a sync error

* Rename test helper function test_adagrad_sparse_helper to adagrad_sparse_test_helper to avoid confusing pytest
2018-05-19 23:10:48 -07:00
kuttas
460e8cd376 change print to logger.warning in operator traceback code (#6216) 2018-04-03 08:01:25 -07:00
Yiming Wu
03c5198331 [C2 Int8][C2 Core]fetch int8 blob
Providing a Python API to fetch Int8 tensors.

  data, scale, zero_point = workspace.FetchInt8Blob(blob_name)

now returns a tuple if the blob contains an Int8TensorCPU:

     'data' = int8 data array
     'scale' = fake quantization scale
     'zero_point' = fake quantization offset

Although FetchBlob shares its back-end implementation with FetchInt8Blob, we raise an
error to prevent unexpected behavior of the same method.
2018-03-30 21:00:44 -07:00
Orion Reblitz-Richardson
1d5780d42c Remove Apache headers from source.
* LICENSE file contains details, so removing from individual source files.
2018-03-27 13:10:18 -07:00
Kutta Srinivasan
0a18608b43 hacks to test exception handling and python operator backtraces
Add exception handling & re-throwing to worker threads of DAGNetBase
2018-03-07 15:09:17 -08:00
Dmytro Dzhulgakov
9e71de398b [core] Graph-level NUMA awareness in Caffe2
Adding NUMA awareness through numa_node_id in DeviceOption. Blobs of operators
with a numa_node_id are allocated on the corresponding memory banks, and CPU pools
with NUMA affinity set are used to run the operators.
2018-03-06 00:33:11 -08:00
Yangqing Jia
ced2c7e2b2 Remove Set/GetDefaultGPUID and move to use current gpu id instead.
Summary:
Reason for this change:

(1) Setting/Getting default gpu id doesn't seem to be used at all.
(2) It actually is confusing compared to the CUDA_VISIBLE_DEVICES options etc.
(3) When setting cuda_gpu_id=-1 in the CUDAContext arg, it used to use the
default gpu id but probably we should use the current gpu - so that the caller
will be able to control the device placement.

One use case is for TensorRT - if we have a custom callback layer, then it would
be easier for TRT or whatever caller to set the running device.

Reviewed By: dzhulgakov

Differential Revision: D6740357

fbshipit-source-id: 2ea710e434b10220d5a198e31c93847304636863
2018-01-19 18:03:21 -08:00
Lu Fang
f779f44c89 Add ONNX exporter for glcgan
Summary: Export PyTorch glcgan model to Caffe2 using ONNX

Reviewed By: dzhulgakov

Differential Revision: D6298765

fbshipit-source-id: 324e52249bb88c6e7bb3b682a4ec0662b6a0c1ea
2017-11-14 10:09:44 -08:00
Ilia Cherniavskii
1149b9bbb5 Polling async net executor
Summary:
Implementation of polling async net executor.
Notes:
- New net executor async_polling - schedules CPU and GPU ops asynchronously, uses single polling thread
- Events: update to Caffe2 events to support async CPU events, adding new methods:
 Query() - non-blocking checking of event states: INITIALIZED -> RECORDED -> SUCCESS/FAILED
 ErrorMessage() - when an operation runs asynchronously and fails, calling this on the event will give the error message
- Tasks: using existing DAGNet's algorithm to compute CPU and GPU chains, a separate task for each chain
- Polling: using a single thread to query the state of events - for CPU tasks it atomically queries task state, for GPU tasks it uses cudaEventQuery via Event
- Scheduling of CPU ops: using global thread pools
- Scheduling of GPU ops: using GPU thread pool per GPU device

Reviewed By: dzhulgakov

Differential Revision: D5985110

fbshipit-source-id: a9de7fcbb71d046a3aa1b573072b89a65dfeee8c
2017-11-03 07:27:44 -07:00
Bram Wasti
a0aa6d0e24 expose flop annotation to python
Summary: expose the flop annotation framework to python functions

Reviewed By: Maratyszcza, Yangqing

Differential Revision: D6135705

fbshipit-source-id: 2eed80b6cbda7b3ee3fe0e019a0f1fc4b0aa320b
2017-10-24 11:35:24 -07:00
Dmytro Dzhulgakov
2972a6ca02 Revert D6026557: [caffe2][PR] Fix "No handlers could be found for logger"
Summary:
This reverts commit 95c634872ac02be721257169e38c8fead04cd66b

bypass-lint

Differential Revision: D6026557

fbshipit-source-id: 663c28583ce3b01070ff5449115ed7e222f71776
2017-10-12 20:21:52 -07:00
Luke Yeager
75bece6ede Fix "No handlers could be found for logger"
Summary: Closes https://github.com/caffe2/caffe2/pull/1316

Differential Revision: D6026557

Pulled By: Yangqing

fbshipit-source-id: 95c634872ac02be721257169e38c8fead04cd66b
2017-10-10 22:32:13 -07:00
Yangqing Jia
8286ce1e3a Re-license to Apache
Summary: Closes https://github.com/caffe2/caffe2/pull/1260

Differential Revision: D5906739

Pulled By: Yangqing

fbshipit-source-id: e482ba9ba60b5337d9165f28f7ec68d4518a0902
2017-09-28 16:22:00 -07:00
Luke Yeager
ebeaecbfa3 workspace_gpu: Get{CUDAVersion,DeviceProperties}
Summary:
Expose some useful utilities to Python
Closes https://github.com/caffe2/caffe2/pull/1216
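
A hedged sketch of calling the exposed utilities; the Python-side names are assumed from the commit title:

```
from caffe2.python import workspace

if workspace.has_gpu_support:
    print(workspace.GetCUDAVersion())        # CUDA runtime version
    print(workspace.GetDeviceProperties(0))  # properties of GPU 0
```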

Differential Revision: D5843888

Pulled By: akyrola

fbshipit-source-id: fc731781aec3c7cc6a4b7132f1624423d015abff
2017-09-17 20:01:34 -07:00
Ben Zhang
cfbd116966 ApplyTransformIfFaster
Summary:
Implemented ApplyTransformIfFaster

Determine if a transform is faster, then return whichever net is better.

Reviewed By: bwasti

Differential Revision: D5534535

fbshipit-source-id: 509943205b0c454bf30fb01343ac4e88d1441c39
2017-08-17 15:36:51 -07:00
Ben Zhang
6314c1fc15 Transforms in Python
Summary: Allow the use of apply_transform() in the python API

Reviewed By: bwasti

Differential Revision: D5530483

fbshipit-source-id: 61a6d36fe125c89629fdeea040a717c453d84417
2017-08-01 16:51:38 -07:00
Szymon Piechowicz
54b171eae5 Caffe2: don't swallow exception stacktrace
Summary:
Caffe2: don't swallow exception stacktrace

{F69325406}

Reviewed By: akyrola

Differential Revision: D5503227

fbshipit-source-id: 4e11d921652a094e20c46af19ba880390be8e997
2017-07-26 15:48:05 -07:00
Thomas Dudziak
5355634dac Dict fixes/improvements and unittest targets for Python 3 in caffe2 core
Summary: As title

Reviewed By: salexspb

Differential Revision: D5316104

fbshipit-source-id: aee43819d817842e5ce6ba3d045a55b1a2491c30
2017-06-29 17:05:41 -07:00
Alexander Sidorov
c8410859d9 Operator python stacktraces, attempt 2
Summary:
Last time, I used a uuid filled into OperatorDef, and operator_tracebacks was populated using traceback.extract_stack. There were several issues with this approach:

1. A random field in OperatorDef breaks workflows relying on memoization, i.e. when computation is skipped based on a previously computed result.
2. Adding one more field revealed that RNNs are not forward compatible wrt new fields. The prototxt format seems to not allow forward compatibility (thanks jamesr66a for the investigation!). For RNNs we need to switch to a more resilient approach. azzolini's proposed change to OperatorDef / NetDef would allow that by nesting NetDef directly inside OperatorDef, without the need for extra serialization.
3. traceback.extract_stack is very slow when the executable is on a remote filesystem. It does one or more os.stat calls for each frame on the stack. In some cases this added up to 15 extra minutes of model construction.

In this diff I use a different approach which should fix all those problems above.

1. and 2. are solved by not adding a new field at all. Instead I report the operator idx wrt the net it runs in. Thanks akyrola and dzhulgakov for the idea. The downside is that operator-list manipulation breaks the logic, and separately created ops are not covered at all.
3. I solved this by operating on raw frames, without using the traceback and inspect modules, which end up doing a lot of filesystem calls. See the function extract_stacktrace in core.py with additional comments.

Reviewed By: dzhulgakov

Differential Revision: D5286285

fbshipit-source-id: 626dd0f5f6b8b1d86bd6bf519078b122f43ddcaa
2017-06-25 19:32:58 -07:00
Alexander Sidorov
83e6a0bec8 Revert uuid change to OperatorDef protobuf
Summary:
a few issues:

1. Randomization hurts memoization.
2. Even if we make it non-random, we can get key collisions when loading it back.
3. RNNs use prototxt for the step net, and apparently it is not forward compatible the way normal protobuf is.

I am thinking of a better, less invasive solution now.

Reviewed By: jamesr66a

Differential Revision: D5272118

fbshipit-source-id: ab577fad04fbfc632e1fceffa923377a0d3da1be
2017-06-19 16:47:31 -07:00
Alexander Sidorov
eebda50b79 Operator python traceback
Summary: This is going to show a Python Caffe2 user where a failed operator was created. The motivation for keeping this information out of the protobuf is to avoid making it too verbose and to keep the ability to read a net's protobufs after a simple print() call.

Reviewed By: jamesr66a

Differential Revision: D5226047

fbshipit-source-id: 7edfe850e05a2ec209577142aa3368664a57a108
2017-06-13 18:50:02 -07:00
Yiming Wu
8871ef029b quick fix future issue with brew/core/schema/workspace/scope/utils.py
Summary:
Fixing a missing `future` package issue.

Recently we found that some of our users do not have `future` module support, so we may need a try/except wrapper around every `past` import, as sketched below.
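
A sketch of that guard, using a representative `past` import:

```
# Fall back gracefully when the future/past compatibility package
# is not installed.
try:
    from past.builtins import basestring
except ImportError:
    basestring = str
```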

Reviewed By: Yangqing

Differential Revision: D5183547

fbshipit-source-id: 262fdf2940ee1be4454bf0b0abb9e6a0f1a0ee82
2017-06-05 12:01:48 -07:00