pytorch/test/cpp
PyExtreme e1d13f4f8b C++ API parity: NLLLoss & CrossEntropyLoss (#29812)
Summary:
Hi yf225, I have added **NLLLoss and CrossEntropyLoss**.
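
For context, here is a minimal usage sketch of the new modules, assuming they mirror the Python frontend (logits of shape `[N, C]`, class-index targets of shape `[N]`); the shapes and values are illustrative:

```
#include <torch/torch.h>

int main() {
  // Illustrative shapes: a batch of 4 samples over 10 classes.
  torch::nn::CrossEntropyLoss criterion;
  torch::Tensor input = torch::randn({4, 10}, torch::requires_grad());
  torch::Tensor target = torch::randint(/*high=*/10, {4}, torch::kLong);

  // CrossEntropyLoss combines log_softmax and nll_loss in one module.
  torch::Tensor loss = criterion(input, target);
  loss.backward();
  return 0;
}
```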

Also, while using `log_softmax` in `cross_entropy_loss`, I am getting an error:

```
../caffe2/../torch/csrc/api/include/torch/nn/functional/loss.h:537:63: error: no matching function for call to ‘log_softmax(const at::Tensor&)’
     const Tensor& log_softmax_input = torch::log_softmax(input);

aten/src/ATen/Functions.h:5551:22: note: candidate: at::Tensor at::log_softmax(const at::Tensor&, int64_t, c10::optional<c10::ScalarType>)
 static inline Tensor log_softmax(const Tensor & self, int64_t dim, c10::optional<ScalarType> dtype) {
                      ^~~~~~~~~~~
aten/src/ATen/Functions.h:5551:22: note:   candidate expects 3 arguments, 1 provided
```

I think the other two parameters should be optional, as they are in the Python frontend (see the documentation at https://pytorch.org/docs/stable/nn.functional.html#torch.nn.functional.log_softmax). Otherwise, the build produced no errors and the tests passed.
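
For reference, a minimal workaround sketch that compiles against the candidate signature quoted above by supplying `dim` and the optional `dtype` explicitly (the shape and `dim` value are illustrative):

```
#include <torch/torch.h>

int main() {
  torch::Tensor input = torch::randn({3, 5});
  // In this version, at::log_softmax takes no default arguments, so the
  // reduction dimension and the optional dtype must be passed explicitly
  // at the call site.
  torch::Tensor out =
      torch::log_softmax(input, /*dim=*/1, /*dtype=*/c10::nullopt);
  return 0;
}
```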
Pull Request resolved: https://github.com/pytorch/pytorch/pull/29812

Differential Revision: D18548249

Pulled By: yf225

fbshipit-source-id: 2ab350abd2a6f498d4dba2345f51ad87471f3038
2019-11-16 10:49:09 -08:00
api C++ API parity: NLLLoss & CrossEntropyLoss (#29812) 2019-11-16 10:49:09 -08:00
common Trim libshm deps, move tempfile.h to c10 (#17019) 2019-02-13 19:38:35 -08:00
dist_autograd Rename dist_autograd_context and dist_autograd_container. (#29696) 2019-11-14 14:49:34 -08:00
jit Revert D18499600: Add overload name to JIT prim operators. 2019-11-15 18:36:17 -08:00
__init__.py Add train() / eval() / is_training() to C++ ScriptModule API (#16044) 2019-02-01 13:07:38 -08:00