Xuehai Pan
b005ec62b9
[BE] Remove dependency on six and future (#94709)
...
Remove the Python 2 and 3 compatibility libraries [six](https://pypi.org/project/six) and [future](https://pypi.org/project/future), along with `torch._six`. We only support Python 3.8+ now, so it's time to retire them.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/94709
Approved by: https://github.com/malfet, https://github.com/Skylion007
2023-02-14 09:14:14 +00:00
Yuyao Wang
0bf78b57c0
fix: max_unpool3d buffer overflow (#94372)
...
Fixes #88032
Previously, `output_size` was accessed before the shape-length check, which led to a buffer overflow for too-short inputs.
The fix simply performs the check first.
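The ordering bug can be sketched in Python (hypothetical helper names; in the C++ kernel the premature indexing reads past the end of the `output_size` buffer rather than raising):

```python
def parse_output_size_buggy(output_size):
    # Buggy order: indexes output_size before validating its length.
    # In C++ this over-read is a buffer overflow; Python raises IndexError.
    d, h, w = output_size[0], output_size[1], output_size[2]
    if len(output_size) != 3:
        raise ValueError("output_size must have 3 elements")
    return d, h, w

def parse_output_size_fixed(output_size):
    # Fixed order: validate the length first, then index.
    if len(output_size) != 3:
        raise ValueError("output_size must have 3 elements")
    return output_size[0], output_size[1], output_size[2]

# A too-short output_size now fails cleanly instead of over-reading:
try:
    parse_output_size_fixed([4, 4])
except ValueError as e:
    print(e)  # output_size must have 3 elements
```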
Pull Request resolved: https://github.com/pytorch/pytorch/pull/94372
Approved by: https://github.com/albanD
2023-02-08 19:48:25 +00:00
Khushi Agrawal
ccd8b66b0a
[testing] add ErrorInputs for adaptive_{avg, max}_poolnd (#90924)
...
Ref: https://github.com/pytorch/pytorch/pull/88906#discussion_r1040157313
Covers:
- [x] adaptive_avg_pool1d
- [x] adaptive_avg_pool2d
- [x] adaptive_avg_pool3d
- [x] adaptive_max_pool1d
- [x] adaptive_max_pool2d
- [x] adaptive_max_pool3d
Pull Request resolved: https://github.com/pytorch/pytorch/pull/90924
Approved by: https://github.com/mruberry
2023-01-12 05:24:01 +00:00
Khushi Agrawal
7cd900eb97
[fix] adaptive_{avg, max}_pool variants: cuda & cpu (#88906)
...
Fixes #78868
#### TODO
- [x] add tests
- [x] adaptive_avg_pool2d
- [x] adaptive_avg_pool3d
- [x] adaptive_max_pool2d
- [x] fix adaptive_max_pool3d_cuda
Pull Request resolved: https://github.com/pytorch/pytorch/pull/88906
Approved by: https://github.com/mruberry
2022-12-13 20:57:00 +00:00
mingfeima
c6942dbbfb
add shape check for random_samples in fractional_max_pool{2d|3d} (#89992)
...
This PR adds shape checks for `random_samples` in fractional_max_pool2d and fractional_max_pool3d,
to raise meaningful errors instead of segfaulting when the input is illegal.
For more details, please check https://github.com/pytorch/pytorch/issues/89648
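The kind of check added can be sketched in Python (function name and message are illustrative, not the exact TORCH_CHECK text): for fractional_max_pool2d, `random_samples` must be an `(N, C, 2)` tensor, verified before any of its values are read.

```python
def check_random_samples_2d(shape, n_batch, n_plane):
    # Validate the shape of random_samples up front, so an illegal input
    # raises a clear error instead of crashing inside the kernel.
    if len(shape) != 3:
        raise ValueError(f"random_samples must be 3D, got {len(shape)}D")
    n, c, two = shape
    if n != n_batch or c != n_plane or two != 2:
        raise ValueError(
            f"expected random_samples of shape ({n_batch}, {n_plane}, 2), "
            f"got ({n}, {c}, {two})")

check_random_samples_2d((4, 3, 2), n_batch=4, n_plane=3)  # passes
```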
Pull Request resolved: https://github.com/pytorch/pytorch/pull/89992
Approved by: https://github.com/jgong5, https://github.com/ezyang
2022-12-06 14:14:41 +00:00
Kshiteej K
ce856cee7e
[test_nn] fix missing class attributes for NNTestCase (#89200)
...
Missed setting these class variables 😓
Pull Request resolved: https://github.com/pytorch/pytorch/pull/89200
Approved by: https://github.com/albanD
2022-11-22 22:55:44 +00:00
kshitij12345
8fb470e81a
[fix] max_pool1d: shape check (#85594)
...
Fixes #76587
Before PR:
```python
import torch
max_pool = torch.nn.MaxPool1d(3)
t = torch.rand([17, 0, 50], dtype=torch.float32) # note requires_grad is False
max_pool(t) # Worked and returned tensor of shape [17, 0, 48].
```
After PR:
```python
import torch
max_pool = torch.nn.MaxPool1d(3)
t = torch.rand([17, 0, 50], dtype=torch.float32) # note requires_grad is False
max_pool(t) # Errors with `max_pool1d: Expected 2D or 3D (batch mode) tensor with optional 0 dim batch size for input, but got: [17, 0, 50]`
```
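For reference, the output length involved here follows the standard pooling formula; a sketch (`stride` defaults to `kernel_size` as in `nn.MaxPool1d`; `ceil_mode` is ignored here):

```python
def max_pool1d_out_len(l_in, kernel_size, stride=None, padding=0, dilation=1):
    # floor((L_in + 2*padding - dilation*(kernel_size - 1) - 1) / stride) + 1
    if stride is None:
        stride = kernel_size  # nn.MaxPool1d default
    return (l_in + 2 * padding - dilation * (kernel_size - 1) - 1) // stride + 1

print(max_pool1d_out_len(50, 3, stride=1))  # 48
print(max_pool1d_out_len(50, 3))            # 16 (default stride == kernel_size)
```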
Pull Request resolved: https://github.com/pytorch/pytorch/pull/85594
Approved by: https://github.com/mruberry
2022-09-29 15:40:09 +00:00
Muhammed Shuaibi
4382da5d5e
Remove assertEqualIgnoreType from test_pooling (#85112)
...
Fix TODOs related to https://github.com/pytorch/pytorch/issues/38095 in test_pooling.py.
This PR correctly casts the expected outputs to satisfy the asserts. If you'd prefer passing `exact_dtype=False` as an argument instead, I can update accordingly.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/85112
Approved by: https://github.com/kit1980
2022-09-16 22:04:42 +00:00
Xiao Wang
5a29db142e
Use int64_t index type in multiplications to avoid integer overflow in max_pool2d and avg_pool2d on CUDA (#68682)
...
Fix https://github.com/pytorch/pytorch/issues/68418
- [X] operator benchmark: https://github.com/xwang233/code-snippet/tree/master/pooling-bench-68682; a regression of 10% or worse is seen for some shapes
- [X] end-to-end benchmark: no major regression seen in our test suites
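The failure mode can be sketched in Python; `to_int32` is a hypothetical helper simulating C's 32-bit two's-complement wraparound, and the tensor sizes are illustrative:

```python
def to_int32(x):
    # Simulate arithmetic in a 32-bit signed int (two's-complement wraparound).
    x &= 0xFFFFFFFF
    return x - 0x100000000 if x >= 0x80000000 else x

# Flattened offset of the last element of a large CHW tensor,
# as a pooling kernel might compute it.
C, H, W = 256, 8192, 8192      # 2**34 elements: too many for int32 indexing
c, h, w = C - 1, H - 1, W - 1

idx64 = (c * H + h) * W + w                               # int64_t: exact
idx32 = to_int32(to_int32(to_int32(c * H + h) * W) + w)   # int32: wraps

print(idx64)  # 17179869183 (== 2**34 - 1)
print(idx32)  # -1 -> a wild out-of-bounds offset inside the CUDA kernel
```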
Pull Request resolved: https://github.com/pytorch/pytorch/pull/68682
Approved by: https://github.com/ngimel
2022-09-12 22:45:38 +00:00
Aidyn-A
372a19d2c6
Update start_index and end_index for adaptive pooling (#84010)
...
### Description
This PR fixes issue #81409. The fix switches the procedure for determining start and end indices for adaptive max pooling and average pooling to integer-only arithmetic.
### Testing
The testing of the new functions is straightforward:
```cpp
#include <iostream>
#include <cassert>
#include <cmath>
#include <cstdint>

// Old float-based index computations.
int64_t start_index(int64_t a, int64_t b, int64_t c) {
  return (int64_t)std::floor((float)(a * c) / b);
}
int64_t end_index(int64_t a, int64_t b, int64_t c) {
  return (int64_t)std::ceil((float)((a + 1) * c) / b);
}

// New integer-only index computations.
int64_t start_index_new(int64_t a, int64_t b, int64_t c) {
  return (a / b) * c + ((a % b) * c) / b;
}
int64_t end_index_new(int64_t a, int64_t b, int64_t c) {
  return 1 + ((a + 1) * c - 1) / b;
}

int main() {
  int64_t N = 2 << 24;
  std::cout << N << '\n';
  int64_t c = 1;
  // Exhaustive (long-running) check that old and new versions agree.
  for (int64_t i = 1; i < N; i++) {
    for (int64_t j = 1; j < N; j++) {
      assert(start_index(i, j, c) == start_index_new(i, j, c));
    }
  }
  for (int64_t i = 1; i < N; i++) {
    for (int64_t j = 1; j < N; j++) {
      assert(end_index(i, j, c) == end_index_new(i, j, c));
    }
  }
}
```
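The precision problem the integer-only formulas avoid can be demonstrated in a few lines of Python (a double's 53-bit mantissa plays the role of the kernel's 24-bit `float`; inputs this large are contrived but show the rounding):

```python
import math

def start_index_float(a, b, c):
    # Old style: goes through floating point, which rounds a * c.
    return math.floor(float(a * c) / b)

def start_index_int(a, b, c):
    # New style: integer-only arithmetic, exact for all inputs.
    return (a // b) * c + ((a % b) * c) // b

# Small inputs agree:
assert start_index_float(7, 3, 5) == start_index_int(7, 3, 5) == 11

# Once a * c exceeds 2**53, float(a * c) rounds and the results diverge:
a, b, c = 2**53 + 1, 1, 1
print(start_index_float(a, b, c))  # 9007199254740992 (rounded down)
print(start_index_int(a, b, c))    # 9007199254740993 (exact)
```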
Pull Request resolved: https://github.com/pytorch/pytorch/pull/84010
Approved by: https://github.com/ezyang
2022-08-29 22:53:40 +00:00
kshitij12345
7a8152530d
move pooling test from test_nn to test/nn/test_pooling (#83915)
...
Ref #63085
Pull Request resolved: https://github.com/pytorch/pytorch/pull/83915
Approved by: https://github.com/albanD
2022-08-24 16:17:50 +00:00