Commit Graph

33 Commits

Author SHA1 Message Date
Xiaomeng Yang
e04c9195b7 Update math::Transpose to support tensor with size > 2G (#17670)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/17670

Update math::Transpose to support tensors with size > 2G

i-am-not-moving-c2-to-c10

Differential Revision: D14313624

fbshipit-source-id: 0b4a85b913972e5a8981f0d40d0c539407b98f30
2019-03-20 18:22:21 -07:00
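The message doesn't detail the change, but supporting tensors with more than 2^31 elements generally means widening index arithmetic from 32-bit to 64-bit. A small standalone illustration of where 32-bit indexing breaks down (a sketch of the failure mode, not the patch itself):

```
#include <cstdint>
#include <cstdio>

int main() {
  // A tensor with size > 2G has more elements than a 32-bit signed
  // index can address (INT32_MAX is 2147483647).
  const int64_t numel = 3LL * 1024 * 1024 * 1024; // ~3G elements
  const int32_t narrow = static_cast<int32_t>(numel); // wraps on typical platforms
  std::printf("64-bit index: %lld, truncated 32-bit index: %d\n",
              static_cast<long long>(numel), narrow);
  return 0;
}
```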
Xiaomeng Yang
3a34f443c5 Separate reduce functions from math (#16929)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16929

Separate CPU reduce functions from math

i-am-not-moving-c2-to-c10

Reviewed By: houseroad

Differential Revision: D13999469

fbshipit-source-id: bd628b15a6e3c1f04cc62aefffb0110690e1c0d1
2019-02-13 17:50:47 -08:00
Xiaomeng Yang
866c4e3467 Separate Moments from math and optimize it (#16175)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16175

Separate Moments from math and optimize it

i-am-not-moving-c2-to-c10

Reviewed By: houseroad

Differential Revision: D13742472

fbshipit-source-id: 90757d908d38c98ca69818855aaf68315e525992
2019-01-20 08:53:25 -08:00
Jerry Zhang
890568a018 Tensor reinitialization codemod - 5/5 (#15884)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15884

Codemod generated with clangr shard mode, 25 files per diff.
To eliminate partially initialized Tensors, we split the initialization of local Tensor variables into two steps: first declare an uninitialized Tensor, then call `ReinitializeTensor` to initialize it.
Motivation: https://github.com/pytorch/pytorch/pull/12407

Reviewed By: hyuen

Differential Revision: D13586737

fbshipit-source-id: dc8e49e9f29505b8898bb19f84c1a983f2d811ab
2019-01-10 16:32:26 -08:00
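For context, a minimal sketch of the two-step pattern this codemod introduces, assuming caffe2's `ReinitializeTensor(Tensor*, at::IntArrayRef, at::TensorOptions)` helper and a float CPU buffer (illustrative; not taken from the diff):

```
#include "caffe2/core/tensor.h"

namespace caffe2 {

void Example(int64_t N) {
  // Step 1: declare an uninitialized local Tensor (no device/dtype yet).
  Tensor buffer;
  // Step 2: initialize it explicitly; ReinitializeTensor reallocates only
  // when the tensor does not already match the requested dims/options.
  ReinitializeTensor(&buffer, {N}, at::dtype<float>().device(CPU));
  float* data = buffer.mutable_data<float>();
  (void)data;
}

} // namespace caffe2
```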
Yinghai Lu
7c053b7e64 Add filler for SparseLengthsWeightedSum (#13949)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/13949

This diff adds filler support for `SparseLengthsWeight*` ops. It does 3 things:
1. Add the fillers for `SparseLengthsWeight*` ops.
2. Add filling heuristics to consider the path `LengthsRangeFill` -> `Gather` -> `SparseLengthsWeightedSum`, where the length input is shared by `LengthsRangeFill` and `SparseLengthsWeightedSum`. We therefore need to carefully bound the value of that length input so that `Gather` does not index out of bounds into its weight input.
3. Fix and simplify the logic of `math::RandFixedSum`: we simply keep rejecting a generated value while it violates the invariants.

Reviewed By: highker

Differential Revision: D13048216

fbshipit-source-id: bfe402e07e6421b28548047d18b298c148e0ec87
2018-11-16 11:31:05 -08:00
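Point 3 above describes rejection sampling: keep re-drawing a value until the invariants hold. A generic sketch of the idea (not caffe2's `math::RandFixedSum` itself), for integers in [lo, hi] that must sum to a fixed target:

```
#include <cstdint>
#include <random>
#include <vector>

// Assumes the problem is feasible: n * lo <= target <= n * hi.
std::vector<int64_t> RandFixedSumSketch(int n, int64_t lo, int64_t hi,
                                        int64_t target, std::mt19937& rng) {
  std::vector<int64_t> out(n);
  std::uniform_int_distribution<int64_t> dist(lo, hi);
  int64_t remaining = target;
  for (int i = 0; i < n; ++i) {
    const int64_t left = n - 1 - i; // values still to draw after this one
    int64_t v = dist(rng);
    // Reject while the remaining `left` values, each bounded by [lo, hi],
    // could no longer reach the target.
    while (remaining - v < left * lo || remaining - v > left * hi) {
      v = dist(rng);
    }
    out[i] = v;
    remaining -= v;
  }
  return out;
}
```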
Jerry Zhang
91e87c0395 Renaming size() to numel() - 2/2
Summary:
Codemod generated with clangr shard mode, 50 files per diff.
clangr code (size -> numel): diffusion/FBS/browse/master/fbcode/caffe2/caffe2/fb/codemods/TensorMethodRename.cpp

i-am-not-moving-c2-to-c10

Reviewed By: ezyang

Differential Revision: D12833748

fbshipit-source-id: 98dc2d3abc23c177c2c9e457b81499952d4b690c
2018-10-29 18:59:29 -07:00
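A minimal illustration of the rename at a call site; `numel()` is the total element count, which the old Caffe2 API exposed as `size()`:

```
#include "caffe2/core/tensor.h"

namespace caffe2 {
int64_t ElementCount(const Tensor& X) {
  return X.numel(); // previously: X.size()
}
} // namespace caffe2
```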
Christian Puhrsch
a6630e25af Remove many caffe2::TIndex and replace them with int64_t (#11943)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/11943

See title

Reviewed By: ezyang

Differential Revision: D9992645

fbshipit-source-id: e8f80d6ea762971513e5e8072975ceea53e1f11a
2018-09-22 18:11:04 -07:00
Roy Li
a9459bf7b5 Replace float16 with at::Half in caffe2 (#11676)
Summary:
- Finishes unifying Half type in pytorch and caffe2
- As a side effect, aten_op works for fp16 now
Pull Request resolved: https://github.com/pytorch/pytorch/pull/11676

Reviewed By: weiyangfb

Differential Revision: D9829019

Pulled By: li-roy

fbshipit-source-id: b8c9663873c10fe64c90ef180dc81af2e866674e
2018-09-20 18:55:17 -07:00
Edward Yang
91797c0672 Replace direct include of caffe2.pb.h with an intermediary header caffe2_pb.h (#10946)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/10946

```
codemod -d . --extensions cc,cpp,cu,cuh,h caffe2/proto/caffe2.pb.h caffe2/proto/caffe2_pb.h
```

Reviewed By: houseroad

Differential Revision: D9539945

fbshipit-source-id: 497d04720e8e7e61c05ffe1b23733d0cb774de7e
2018-08-28 11:57:08 -07:00
Xiaomeng Yang
f57e4ce1d5 Update broadcast with alpha to reduce num of launching kernels. (#10235)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/10235

Update broadcast with alpha to reduce the number of kernel launches.

Reviewed By: houseroad

Differential Revision: D9175824

fbshipit-source-id: 7a463833350a2c84dcfb82f73cf40da403dd59a0
2018-08-04 19:54:20 -07:00
Xiaomeng Yang
57d2d4bcff Optimize reduce ops for 2d and 3d (#9992)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/9992

Optimize reduce ops for 2d and 3d

Reviewed By: houseroad

Differential Revision: D9042505

fbshipit-source-id: 62af2125aa6439106293e59bdf6a2b920792fd2d
2018-08-04 13:53:58 -07:00
Jerry Zhang
aebf3b47ae Remove template parameter from Tensor (#9939)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/9939

Pull Request resolved: https://github.com/facebookresearch/weakly-supervised-action-detection/pull/13

Pull Request resolved: https://github.com/pytorch/translate/pull/166

Pull Request resolved: https://github.com/pytorch/pytorch/pull/9125

Closes https://github.com/pytorch/pytorch/pull/9125

Use inheritance for polymorphism, and remove the template parameter.
This changes the templating at call sites; the core implementations will change later.

Before, the Caffe2 Tensor class was fixed at compile time to bind to a particular device/context. With this change, we make it a runtime property (stored inside the tensor) but preserve the same semantics. For example, one has to specify a device type in order to create a Tensor - there are no uninitialized tensors. More specifically, the changes are:

1. We added an extra argument *DeviceType* to most of the Tensor constructors, e.g. `Tensor(DeviceType type)`.
2. The semantics of the constructor `Tensor(const Tensor<SrcContext>& src, ContextForCopy* context)` changed: the second context is passed in to let us call the templated Copy function. It could previously be a different context than the source and target; now we enforce that, if provided, it has the same device type as src.
3. To preserve the 'get-or-construct' semantics of Blob, we added a specialized getter `Blob::GetMutableTensor` that verifies both that the Blob contains a Tensor and that it is of the correct type.
4. Specifically, the Tensor type is no longer default-constructible (as we don't have unknown-device tensors), so some of the code handling STL containers needs to change.

Note: Some changes are postponed just to keep this diff a bit smaller. Please see `TODO`s.

Reviewed By: ezyang, houseroad

Differential Revision: D9024330

fbshipit-source-id: e0b8295d2dc6ebe2963383ded5af799ad17164ba
2018-07-27 10:56:39 -07:00
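A before/after sketch of the call-site change described in points 1 and 4 (the `Tensor x(CPU)` form follows the `Tensor(DeviceType type)` constructor named above; the 2019 ReinitializeTensor codemod higher up this log shows the no-default-constructor rule was later relaxed):

```
#include "caffe2/core/tensor.h"

namespace caffe2 {
void Sketch() {
  // Before: the device was a compile-time template parameter:
  //   Tensor<CPUContext> x;
  //
  // After: the device type is a runtime property passed to the
  // constructor (point 1), and there is no default constructor (point 4):
  Tensor x(CPU);
}
} // namespace caffe2
```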
Jerry Zhang
969b62f276 Revert D8121878: Remove template parameter from Tensor
Differential Revision: D8121878

Original commit changeset: 4a5e9a677ba4

fbshipit-source-id: d8e2c0bb145b52fbcca323b22d1d3346f0b3249e
2018-07-26 14:02:04 -07:00
Jerry Zhang
cd5adc7b5f Remove template parameter from Tensor (#13)
Summary:
Pull Request resolved: https://github.com/facebookresearch/weakly-supervised-action-detection/pull/13

Pull Request resolved: https://github.com/pytorch/translate/pull/166

Pull Request resolved: https://github.com/pytorch/pytorch/pull/9125

Closes https://github.com/pytorch/pytorch/pull/9125

Use inheritance for polymorphism, and remove the template parameter.
This changes the templating at call sites; the core implementations will change later.

Before, the Caffe2 Tensor class was fixed at compile time to bind to a particular device/context. With this change, we make it a runtime property (stored inside the tensor) but preserve the same semantics. For example, one has to specify a device type in order to create a Tensor - there are no uninitialized tensors. More specifically, the changes are:

1. We added an extra argument *DeviceType* to most of the Tensor constructors, e.g. `Tensor(DeviceType type)`.
2. The semantics of the constructor `Tensor(const Tensor<SrcContext>& src, ContextForCopy* context)` changed: the second context is passed in to let us call the templated Copy function. It could previously be a different context than the source and target; now we enforce that, if provided, it has the same device type as src.
3. To preserve the 'get-or-construct' semantics of Blob, we added a specialized getter `Blob::GetMutableTensor` that verifies both that the Blob contains a Tensor and that it is of the correct type.
4. Specifically, the Tensor type is no longer default-constructible (as we don't have unknown-device tensors), so some of the code handling STL containers needs to change.

Note: Some changes are postponed just to keep this diff a bit smaller. Please see `TODO`s.

Reviewed By: xw285cornell

Differential Revision: D8121878

fbshipit-source-id: 4a5e9a677ba4ac82095df959851a054c81eccf81
2018-07-26 10:25:23 -07:00
Xiaomeng Yang
5df3eae89e Add 1x1 specialization for conv with NCHW order (#9671)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/9671

Add 1x1 specialization for conv with NCHW order

Reviewed By: houseroad

Differential Revision: D8944686

fbshipit-source-id: 94bf44f69498b1934b7dfff4c0e989342c7bb61c
2018-07-23 18:54:58 -07:00
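Why a 1x1 specialization pays off: with NCHW layout, stride 1, and no padding, a 1x1 convolution is exactly a matrix product of the [C_out x C_in] filter with each image's [C_in x H*W] slab, so the im2col step can be skipped entirely. A plain-loop sketch of the math (not the committed kernel, which would dispatch to a GEMM):

```
// Y[n, co, p] = sum_ci W[co, ci] * X[n, ci, p], where p runs over the
// H*W spatial positions of one image.
void Conv1x1NCHW(const float* X, const float* W, float* Y,
                 int N, int Cin, int Cout, int HxW) {
  for (int n = 0; n < N; ++n) {
    for (int co = 0; co < Cout; ++co) {
      for (int p = 0; p < HxW; ++p) {
        float acc = 0.0f;
        for (int ci = 0; ci < Cin; ++ci) {
          acc += W[co * Cin + ci] * X[(n * Cin + ci) * HxW + p];
        }
        Y[(n * Cout + co) * HxW + p] = acc;
      }
    }
  }
}
```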
Xiaomeng Yang
e27d66a454 Remove Eigen from math CUDA and update algorithm in ReduceTensor and Moments (#6922) 2018-04-24 23:07:35 -07:00
Xiaomeng Yang
34fa355f27 [caffe2] Add Moments to math (#6798)
* Add gpu check for reduce_max

* Add Moments in math

* Update cpu version to avoid the int type being 0

* Update Moments on CPU to same as GPU
2018-04-21 01:03:44 -07:00
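`Moments` computes the mean and variance of the reduced elements. A one-pass CPU sketch over a flat array, using var(x) = E[x^2] - (E[x])^2 (the committed version reduces over arbitrary axes, which this omits):

```
#include <cstdint>

void MomentsSketch(const float* X, int64_t n, float* mean, float* var) {
  double sum = 0.0, sum_sq = 0.0; // accumulate in double for accuracy
  for (int64_t i = 0; i < n; ++i) {
    sum += X[i];
    sum_sq += static_cast<double>(X[i]) * X[i];
  }
  const double mu = sum / n;
  *mean = static_cast<float>(mu);
  *var = static_cast<float>(sum_sq / n - mu * mu); // E[x^2] - (E[x])^2
}
```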
Xiaomeng Yang
be0b7f8c81 Add reduce min and reduce max (#6685) 2018-04-18 10:58:05 -07:00
Xiaomeng Yang
4be34ca0f3 Add broadcast and reduce gradient (#6668)
Add broadcast and reduce gradient
2018-04-17 13:31:13 -07:00
Xiaomeng Yang
8849bea120 [caffe2] Update ReduceOps (#6497)
* Update ReduceMean

* Add reduce mean to math

* Update cuda flag

* Update Eigen::Tensor ctor

* Remove unused variables

* Skip ReduceTensorGPUTest if no gpus

* Add NOMINMAX for windows

* Fix lpnorm_op in windows
2018-04-11 23:36:05 -07:00
Orion Reblitz-Richardson
1d5780d42c Remove Apache headers from source.
* LICENSE file contains details, so removing from individual source files.
2018-03-27 13:10:18 -07:00
Xiaomeng Yang
278d398748 Add GPU version of math::Transpose
Summary: Add GPU version of math::Transpose

Reviewed By: Yangqing

Differential Revision: D6747958

fbshipit-source-id: 7047107609386c1ab53492381ca9bcf8bccd2924
2018-01-24 14:18:02 -08:00
Xiaomeng Yang
0a8a18ca01 Fix GemmBatched
Summary: Fix GemmBatched

Reviewed By: Yangqing

Differential Revision: D6678168

fbshipit-source-id: 132117633573600d4e31c1959a0ccbe34416e1f1
2018-01-10 18:16:52 -08:00
Xian Li
c1d9694f42 Backed out changeset 6f532bad5824
Summary: D6636282 caused a regression test failure of the NMT model used in prod; see 24949620 for bisect history.

Reviewed By: pietern

Differential Revision: D6671602

fbshipit-source-id: d863013964666727cf488a6ac5b01f5216f149d9
2018-01-05 19:34:38 -08:00
Xiaomeng Yang
2cda295244 Adds cpu version of transpose util function in math.
Summary: Adds transpose CPU version to prepare for LC layer.

Reviewed By: Yangqing

Differential Revision: D6641358

fbshipit-source-id: 1825b4c270dea2c0049ba334303abcbf50b22ee7
2018-01-04 23:05:40 -08:00
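In its simplest 2-D form, such a utility computes the following (a generic sketch; the committed function handles arbitrary N-d axis permutations):

```
// Transpose an M x N row-major matrix X into an N x M matrix Y.
void Transpose2D(const float* X, int M, int N, float* Y) {
  for (int i = 0; i < M; ++i) {
    for (int j = 0; j < N; ++j) {
      Y[j * M + i] = X[i * N + j];
    }
  }
}
```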
Xiaomeng Yang
68726df0ac Fix GemmBatchedOp
Summary: Fix GemmBatchedOp to prepare for LC Layer.

Reviewed By: Yangqing

Differential Revision: D6636282

fbshipit-source-id: 6f532bad582442ebf3da843e973eb85405371c02
2018-01-03 21:16:18 -08:00
Yangqing Jia
8286ce1e3a Re-license to Apache
Summary: Closes https://github.com/caffe2/caffe2/pull/1260

Differential Revision: D5906739

Pulled By: Yangqing

fbshipit-source-id: e482ba9ba60b5337d9165f28f7ec68d4518a0902
2017-09-28 16:22:00 -07:00
Lukasz Wesolowski
a0b83464e4 fix bad conversion to float in cpu_half2float
Summary: When converting from half to float, the bytes to be returned were represented as an unsigned int. When returning, this had the effect of converting the unsigned int's value into a float. This is incorrect; we instead want to take the raw bits and reinterpret them as a float.

Reviewed By: pietern, asaadaldien

Differential Revision: D5080335

fbshipit-source-id: 7208efc5799daccf92e1628ee326f7470b867261
2017-05-17 15:57:42 -07:00
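The bug class is easy to reproduce: assigning an integer that holds float bits performs a value conversion, while the intent here is bit reinterpretation. A minimal standalone illustration (not the caffe2 code):

```
#include <cstdint>
#include <cstdio>
#include <cstring>

int main() {
  // 0x3F800000 is the IEEE-754 bit pattern of 1.0f.
  const uint32_t bits = 0x3F800000u;

  // The bug: converting the integer's *value* to float.
  const float wrong = bits; // 1065353216.0f

  // The fix: reinterpret the same bits as a float.
  float right;
  std::memcpy(&right, &bits, sizeof(right)); // 1.0f

  std::printf("wrong = %f, right = %f\n", wrong, right);
  return 0;
}
```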
Andrew Gallagher
9c58341809 codemod: use <> includes for gtest headers
Summary: These are system headers and so should be included via `<>`.

Reviewed By: yfeldblum

Differential Revision: D4783480

fbshipit-source-id: 979670b594859b45560cead34f615442dfcc9f8b
2017-03-28 00:50:54 -07:00
Yangqing Jia
6463eebc7b chunky sync - build scripts to be written 2016-07-21 10:16:42 -07:00
Yangqing Jia
98c5b86ef7 A few changes:
(1) cudnn for conv
(2) cublas: after going through the work I feel it's better to use HOST pointer mode, so changed it.
(3) storage order: although googlenet and multibox use NHWC, it seems better to keep
    NCHW as the default to be consistent with caffe and cudnn; moved to NCHW as default.
2015-10-21 22:37:11 -07:00
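On the cublas pointer mode in (2): with `CUBLAS_POINTER_MODE_HOST`, the alpha/beta scalars are read from host memory, so ordinary stack floats can be passed straight to GEMM calls. A sketch under that assumption:

```
#include <cublas_v2.h>

// C = alpha * A * B + beta * C, column-major, no transposes.
void GemmHostScalars(cublasHandle_t handle, int m, int n, int k,
                     const float* A, const float* B, float* C) {
  cublasSetPointerMode(handle, CUBLAS_POINTER_MODE_HOST);
  const float alpha = 1.0f, beta = 0.0f; // host scalars, passed by address
  cublasSgemm(handle, CUBLAS_OP_N, CUBLAS_OP_N, m, n, k,
              &alpha, A, m, B, k, &beta, C, m);
}
```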
Yangqing Jia
648d1b101a A consolidation of a couple of pieces of random weekend work.
(1) various bugfixes.
(2) Tensor is now a class independent of its data type. This allows us
    to write type-independent operators more easily.
(3) code convention changes a bit: dtype -> T, Tensor<*Context> -> Tensor* alias.
(4) ParallelNet -> DAGNet to be more consistent with what it does.
(5) Caffe's own flags library instead of gflags.
(6) Caffe's own logging library instead of glog, but glog can be chosen with
    compile-time definition -DCAFFE2_USE_GOOGLE_GLOG. As a result, glog macros
    like CHECK, DCHECK now have prefix CAFFE_, and LOG(*) now becomes
    CAFFE_LOG_*.
(7) an optional protobuf inclusion, which can be chosen with USE_SYSTEM_PROTOBUF
    in build_env.py.
2015-10-11 23:14:06 -07:00
Yangqing Jia
2ed1077a83 A clean init for Caffe2, removing my earlier hacky commits.
2015-06-25 16:26:01 -07:00