Grammatical update of tech docs (#61547)

Summary:
Made some minor grammatical fixes to the 'Complex Numbers' docs.

![Screenshot (180)](https://user-images.githubusercontent.com/75036632/125342884-0b952500-e373-11eb-9e63-410ff31e6c21.png)

Pull Request resolved: https://github.com/pytorch/pytorch/pull/61547

Reviewed By: zou3519

Differential Revision: D29677361

Pulled By: H-Huang

fbshipit-source-id: 78222310a755911192905a8f52aa0ae325900006

@@ -4,12 +4,12 @@ Complex Numbers
 ===============
 
 Complex numbers are numbers that can be expressed in the form :math:`a + bj`, where a and b are real numbers,
-and *j* called the imaginary unit, which satisfies the equation :math:`j^2 = -1`. Complex numbers frequently occur in mathematics and
-engineering, especially in signal processing. Traditionally many users and libraries (e.g., TorchAudio) have
+and *j* is called the imaginary unit, which satisfies the equation :math:`j^2 = -1`. Complex numbers frequently occur in mathematics and
+engineering, especially in topics like signal processing. Traditionally many users and libraries (e.g., TorchAudio) have
 handled complex numbers by representing the data in float tensors with shape :math:`(..., 2)` where the last
 dimension contains the real and imaginary values.
 
-Tensors of complex dtypes provide a more natural user experience for working with complex numbers. Operations on
+Tensors of complex dtypes provide a more natural user experience while working with complex numbers. Operations on
 complex tensors (e.g., :func:`torch.mv`, :func:`torch.matmul`) are likely to be faster and more memory efficient
 than operations on float tensors mimicking them. Operations involving complex numbers in PyTorch are optimized
 to use vectorized assembly instructions and specialized kernels (e.g. LAPACK, cuBlas).
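
For context, a minimal sketch of the two representations the edited paragraphs contrast: the legacy `(..., 2)` float layout versus a native complex dtype. It uses the stock `torch.view_as_complex` / `torch.view_as_real` APIs; the shapes are illustrative, not taken from the docs page.

```python
import torch

# Legacy representation: real and imaginary parts packed into the last
# dimension of a float tensor with shape (..., 2).
x_float = torch.randn(3, 2)

# Reinterpret the same storage as a complex tensor of shape (3,)
# (the last dimension must have size 2).
x_complex = torch.view_as_complex(x_float)

# Operations such as torch.mv run directly on complex tensors.
m = torch.randn(2, 3, dtype=torch.cfloat)
y = torch.mv(m, x_complex)  # shape (2,), dtype torch.cfloat

# Round-trip back to the (..., 2) float layout; this is a view of the
# same memory, so no copy is made.
x_back = torch.view_as_real(x_complex)
assert torch.equal(x_back, x_float)
```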