Update SECURITY.md

Sadeed pv 2022-07-26 10:39:12 +04:00 committed by GitHub
parent fb568f9e29
commit d38cb2f6a7


@@ -67,7 +67,7 @@ process untrusted inputs assuming there are no bugs. There are two main reasons
 to not rely on this: First, it is easy to write models which must not be exposed
 to untrusted inputs, and second, there are bugs in any software system of
 sufficient complexity. Letting users control inputs could allow them to trigger
-bugs either in TensorFlow or in depending libraries.
+bugs either in TensorFlow or its dependencies.
 In general, it is good practice to isolate parts of any system which is exposed
 to untrusted (e.g., user-provided) inputs in a sandbox.
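
As an illustration of the sandboxing advice in the passage above, untrusted inputs can be handed to the model through a separate, time-limited worker process so that a crash or hang triggered by a hostile input stays in the worker. This is a minimal sketch, assuming a hypothetical `run_model.py` worker script that loads the model, reads one JSON request from stdin, and writes one JSON response to stdout; it shows process isolation only, a starting point rather than a full sandbox.

```python
import json
import subprocess
import sys


def run_in_sandbox(payload: dict, timeout_s: float = 5.0) -> dict:
    """Run model inference on an untrusted payload in a separate worker process.

    ``run_model.py`` is a hypothetical worker: it loads the model, reads one
    JSON request from stdin, and writes one JSON response to stdout. Keeping it
    in its own process means a crash or hang caused by a malformed input kills
    only the worker, and the timeout bounds how long it can run.
    """
    try:
        proc = subprocess.run(
            [sys.executable, "run_model.py"],
            input=json.dumps(payload).encode(),
            capture_output=True,
            timeout=timeout_s,
        )
    except subprocess.TimeoutExpired:
        return {"error": "inference timed out"}
    if proc.returncode != 0:
        return {"error": "worker exited abnormally"}
    return json.loads(proc.stdout)


if __name__ == "__main__":
    # Untrusted input never touches the model inside the serving process itself.
    print(run_in_sandbox({"inputs": [[1.0, 2.0, 3.0]]}))
```

In practice the worker would additionally be wrapped in an OS-level sandbox (for example a container or seccomp profile) and run with minimal privileges, in line with the policy text's recommendation to isolate anything exposed to user-provided inputs.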