Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/66006
Previously, this resulted in a key collision and a crash.
ghstack-source-id: 139342089
Test Plan: Ran webdriver test locally.
Reviewed By: dhruvbird
Differential Revision: D31281092
fbshipit-source-id: f31311726c681d6d7e0504ff8e84c888af9054f0
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/60702
- Instead of traversing and counting all tensor memory, collect a map
from storage key to storage info while traversing. Add up sizes at
the end to avoid double counting.
- Count tensor memory from constants as well.
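The dedup-by-storage idea above can be sketched as follows. This is an illustrative sketch only (the function and field names are hypothetical, not the actual model_dump internals): tensors that share a storage are recorded once in a map keyed by storage key, so their bytes are summed only once at the end.

```python
# Hypothetical sketch: collect storage sizes in a map keyed by storage
# key while traversing, then sum at the end to avoid double counting
# tensors (e.g. views) that share the same storage.
def total_tensor_memory(tensor_infos):
    storages = {}  # storage key -> storage size in bytes
    for info in tensor_infos:
        storages[info["storage_key"]] = info["storage_nbytes"]
    return sum(storages.values())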
Test Plan: Ran webdriver test.
Reviewed By: dhruvbird
Differential Revision: D29380396
Pulled By: dreiss
fbshipit-source-id: 6d0fd66f677fe23c851aa218387aa4dc59502b1e
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/60701
The unit test previously only tested that the dump could complete
successfully. It was not able to verify that any JS worked properly.
Now we can test the JS as long as webdriver is installed.
Tweaked the implementation of Hider a bit to make it easier for tests to
find and open hidden sections.
I disabled the tests by default since I don't want to deal with
webdriver in CI. Enable them with the environment variable
RUN_WEBDRIVER=1.
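The opt-in gating described above might look like the following sketch (the test class and its contents are hypothetical stand-ins, not the actual PyTorch test):

```python
import os
import unittest

# Tests are skipped unless the caller opts in via RUN_WEBDRIVER=1,
# keeping webdriver out of the default CI path.
RUN_WEBDRIVER = os.environ.get("RUN_WEBDRIVER") == "1"

@unittest.skipUnless(RUN_WEBDRIVER, "set RUN_WEBDRIVER=1 to run webdriver tests")
class TestModelDumpJS(unittest.TestCase):
    def test_page_renders(self):
        pass  # would drive a real browser via webdriver
```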
We could make the tests use headless mode, but it's kind of fun to watch
them run.
Add a test to verify that tensor memory computation is working for the
simple model.
Test Plan: Ran the test.
Reviewed By: dhruvbird
Differential Revision: D29380398
Pulled By: dreiss
fbshipit-source-id: f19d0b05d79ad5a8231e85422976f1889e021c89
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/60700
Test Plan:
Dumped a model with a lot of constants (qconvs produced by optimizing).
Was able to see them rendered nicely.
Reviewed By: dhruvbird
Differential Revision: D29380400
Pulled By: dreiss
fbshipit-source-id: c951508b92bb2717591dd173282157e1a40a30bd
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/57661
The Pickle "specification" (pickletools.py) states that the argument to
a BINUNICODE opcode must be UTF-8 encoded. However, if a PyTorch custom
class returns a non-UTF-8 std::string from its pickle method, the
libtorch Pickler will write it to the output pickle without complaining.
Python's _Unpickler (the Python implementation of Unpickler) always
throws an exception when trying to deserialize these invalid pickles.
We still want to be able to dump these pickle files. Update
DumpUnpickler to create its own opcode dispatch table (initialized as a
clone of the _Unpickler dispatch table) and patch in a custom function
for the BINUNICODE op. We try to emulate the default behavior, but any
UnicodeDecodeError is caught and replaced with a dummy object. This
could violate the assumptions of a user that expects a str in that
position, so we disable this behavior by default.
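The dispatch-table patch described above can be sketched like this. This is a sketch of the approach, not the actual DumpUnpickler: it subclasses the pure-Python unpickler, clones its dispatch table so the base class is unaffected, and patches the BINUNICODE entry so invalid bytes produce a dummy object (`InvalidString` here is a hypothetical stand-in) instead of raising.

```python
import pickle
import struct

class InvalidString:
    """Hypothetical dummy object standing in for a non-UTF-8 'string'."""
    def __init__(self, raw):
        self.raw = raw

class LenientUnpickler(pickle._Unpickler):
    # Clone the dispatch table; pickle._Unpickler itself stays untouched.
    dispatch = dict(pickle._Unpickler.dispatch)

    def load_binunicode(self):
        size, = struct.unpack("<I", self.read(4))  # 4-byte little-endian length
        data = self.read(size)
        try:
            obj = data.decode("utf-8")  # the default, spec-compliant path
        except UnicodeDecodeError:
            obj = InvalidString(data)   # replace instead of raising
        self.append(obj)

    # Patch only the BINUNICODE ('X') opcode in the cloned table.
    dispatch[pickle.BINUNICODE[0]] = load_binunicode
```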
Update model_dump to recognize this special object and allow it to be
rendered.
Test Plan: Dumped and viewed a model with an invalid string in an object state.
Reviewed By: malfet
Differential Revision: D28531392
Pulled By: dreiss
fbshipit-source-id: ab5aea20975a0ef53ef52a880deaa2c5a626e4a2
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/57658
Since there is no Python change here and we only do the analysis when
rendering the open section, this should have no impact on page size or
load time! (Well, a constant impact on page size due to the added
code.) Before I made it lazy, I observed that it increased load time by
over 100ms for a large model.
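The laziness described above amounts to deferring the expensive analysis until the section is first opened, then caching the result. A minimal sketch of that pattern (all names here are hypothetical):

```python
# Hypothetical sketch: the expensive analysis runs only on the first
# render of the open section, and the result is cached afterwards.
class LazySection:
    def __init__(self, compute):
        self._compute = compute
        self._cached = None
        self._computed = False

    def render(self):
        if not self._computed:
            self._cached = self._compute()
            self._computed = True
        return self._cached
```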
Test Plan: Dumped a CUDA model and saw the size summary.
Reviewed By: malfet
Differential Revision: D28531394
Pulled By: dreiss
fbshipit-source-id: f77012b7bab069de861a4ba23486c665e1306aa0
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/57657
Test Plan: Clicked around a model with some dicts in it.
Reviewed By: malfet
Differential Revision: D28531397
Pulled By: dreiss
fbshipit-source-id: 069690f147e91eadd76fec5f5ca4eec057abcb98
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/57655
Now a lot of code is shared between tensor and qtensor rendering. The
net line count is actually +1, but it should result in a savings if/when
we implement some of those TODOs.
Test Plan: Clicked around in Chrome.
Reviewed By: malfet
Differential Revision: D28531395
Pulled By: dreiss
fbshipit-source-id: 190a04ed587b54d27f3410246763cd636c0634be
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/57654
I learned how to use children in React/Preact. :) Now it's not
necessary to give every hidable section its own id and synchronize the
"shown=false" with "style='display:none;'".
This also means that the hidden elements aren't rendered to the DOM
unless the hider is open.
Test Plan: Clicked around in Chrome.
Reviewed By: malfet
Differential Revision: D28531393
Pulled By: dreiss
fbshipit-source-id: bc86c823ae4b7e80c000f50c5429d89dff6ae64d
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/56868
See __init__.py for a summary of the tool.
The following sections are present in this initial version:
- Model Size. Show the total model size, as well as a breakdown by
stored files, compressed files, and zip overhead. (I expect this
breakdown to be a bit more useful once data.pkl is compressed.)
- Model Structure. This is basically the output of
`show_pickle(data.pkl)`, but as a hierarchical structure.
Some structures cause this view to crash right now, but it can be
improved incrementally.
- Zip Contents. This is basically the output of `zipinfo -l`.
- Code. This is the TorchScript code. It's integrated with a blame
window at the bottom, so you can click "Blame Code", then click a bit
of code to see where it came from (based on the debug_pkl). This
currently doesn't render properly if debug_pkl is missing or
incomplete.
- Extra files (JSON). JSON dumps of each json file under /extra/, up to
a size limit.
- Extra Pickles. For each .pkl file in the model, we safely unpickle it
with `show_pickle`, then render it with `pprint` and include it here
if the size is not too large. We aren't able to install the pprint
hack that the show_pickle CLI uses, so we get one-line rendering for
custom objects, which is not very useful. Built-in types look fine,
though. In particular, bytecode.pkl seems to look fine (and we
hard-code that file to ignore the size limit).
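The Model Size breakdown above can be sketched with the standard zipfile module. This is a hypothetical sketch of the idea, not the actual model_dump code: it splits a zip-based model file into compressed file bytes, uncompressed ("stored") bytes, and zip container overhead.

```python
import os
import zipfile

def size_breakdown(path):
    # Total bytes on disk vs. bytes attributable to the archived files;
    # the remainder is zip overhead (local headers, central directory).
    total = os.path.getsize(path)
    with zipfile.ZipFile(path) as zf:
        infos = zf.infolist()
        stored = sum(i.file_size for i in infos)        # uncompressed sizes
        compressed = sum(i.compress_size for i in infos)
    return {
        "total": total,
        "stored": stored,
        "compressed": compressed,
        "zip_overhead": total - compressed,
    }
```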
I'm checking in the JS dependencies to avoid a network dependency at
runtime. They were retrieved from the following URLs, then passed
through a JS minifier:
https://unpkg.com/htm@3.0.4/dist/htm.module.js?module
https://unpkg.com/preact@10.5.13/dist/preact.module.js?module
Test Plan:
Manually ran on a few models I had lying around.
Mostly tested in Chrome, but I also poked around in Firefox.
Reviewed By: dhruvbird
Differential Revision: D28020849
Pulled By: dreiss
fbshipit-source-id: 421c30ed7ca55244e9fda1a03b8aab830466536d