Mirror of https://github.com/zebrajr/pytorch.git
synced 2025-12-07 12:21:27 +01:00
Fix typos under caffe2 directory (#87840)
This PR fixes typos in `.md` files under the caffe2 directory.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/87840
Approved by: https://github.com/kit1980
This commit is contained in:
parent
e8a97a3721
commit
daff5d3556
@@ -1,7 +1,7 @@
 libopencl-stub
 ==============
 
-A stub opecl library that dynamically dlopen/dlsyms opencl implementations at runtime based on environment variables. Will be useful when opencl implementations are installed in non-standard paths (say pocl on android)
+A stub opencl library that dynamically dlopen/dlsyms opencl implementations at runtime based on environment variables. Will be useful when opencl implementations are installed in non-standard paths (say pocl on android)
 
 
 
@@ -19,7 +19,7 @@ This doc keeps tracking why operators are not covered by the testcases.
 |Atan|||💚OK|
 |AveragePool||OK|💚OK|
 |BatchNormalization||OK|💚OK|
-|Cast|Yes||💔Need extendtion|
+|Cast|Yes||💔Need extension|
 |Ceil|Yes||💚OK|
 |Clip|Yes|OK|💚OK|
 |Concat|Yes|OK|💚OK|
@@ -19,8 +19,8 @@ To compute the quantization parameters of activation tensors, we need to know th
 
 * Floating-point requantization
 
-Unlike gemmlowp using fixed-point operations that emulates floating point operations of requantization, fbgemm just uses single-precison floating-point operations. This is because in x86 just using single-precision floating-point operations is faster. Probably, gemmlowp used pure fixed-point operations for low-end mobile processors. QNNPACK also has similar constraints as gemmlowp and provides multiple options of requantization implementations.
-The users could modify the code to use a different requantization implementation to be bit-wise idential to the HW they want to emulate for example. If there're enough requests, we could consider implementing a few popular fixed-point requantization as QNNPACK did.
+Unlike gemmlowp using fixed-point operations that emulates floating point operations of requantization, fbgemm just uses single-precision floating-point operations. This is because in x86 just using single-precision floating-point operations is faster. Probably, gemmlowp used pure fixed-point operations for low-end mobile processors. QNNPACK also has similar constraints as gemmlowp and provides multiple options of requantization implementations.
+The users could modify the code to use a different requantization implementation to be bit-wise identical to the HW they want to emulate for example. If there're enough requests, we could consider implementing a few popular fixed-point requantization as QNNPACK did.
 
 * 16-bit accumulation with outlier-aware quantization
 
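The floating-point requantization that this hunk discusses can be sketched as below. This is an assumed illustration of the general technique, not fbgemm's actual implementation: the function name, parameter names, and the uint8 saturation policy are all chosen for the sketch. A 32-bit accumulator is scaled by the real-valued multiplier `act_scale * weight_scale / out_scale` in single precision, rounded, shifted by the output zero point, and clamped to the quantized range.

```cpp
// Hedged sketch of single-precision floating-point requantization
// (illustrative names; not fbgemm's real API). Requires C++17 for std::clamp.
#include <algorithm>
#include <cmath>
#include <cstdint>

std::uint8_t requantize_fp32(std::int32_t acc,
                             float act_scale, float weight_scale,
                             float out_scale, std::int32_t out_zero_point) {
    // The real-valued multiplier that fixed-point schemes would instead
    // approximate with an integer multiplier plus a right shift.
    float multiplier = act_scale * weight_scale / out_scale;
    // Round in the current FP mode (round-to-nearest-even by default).
    float scaled = std::nearbyintf(static_cast<float>(acc) * multiplier);
    std::int32_t q = static_cast<std::int32_t>(scaled) + out_zero_point;
    // Saturate to the uint8 output range.
    return static_cast<std::uint8_t>(std::clamp(q, 0, 255));
}
```

A fixed-point variant, as gemmlowp and QNNPACK provide, would replace `multiplier` with a precomputed integer multiplier and shift so that results are bit-wise reproducible on hardware without fast FP units.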
@@ -133,7 +133,7 @@ If you're running this all on a cloud computer, you probably won't have a UI or
 
 First configure your cloud server to accept port 8889, or whatever you want, but change the port in the following commands. On AWS you accomplish this by adding a rule to your server's security group allowing a TCP inbound on port 8889. Otherwise you would adjust iptables for this.
 
-Next you launch the Juypter server.
+Next you launch the Jupyter server.
 
 ```
 jupyter notebook --no-browser --port=8889