This makes for a fairer A/B test: the timings aren't affected by the system being under different load while one alternative is measured versus the other.
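The fairness comes from alternating trials of the two variants rather than running one complete batch and then the other, so slow drift in system load lands on both samples roughly equally. A minimal sketch of that pattern (the harness names here are invented, not the real scripts):

```javascript
// Time a single run of a function, in milliseconds.
function timeMs(fn) {
  const start = Date.now();
  fn();
  return Date.now() - start;
}

// Interleave trials A/B/A/B/... instead of AAAA...BBBB, so any change in
// background load during the session affects both samples about equally.
function interleavedTrials(runControl, runTest, n) {
  const control = [];
  const test = [];
  for (let i = 0; i < n; i++) {
    control.push(timeMs(runControl));
    test.push(timeMs(runTest));
  }
  return { control, test };
}
```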
(cherry picked from commit c0007d56e9)
Previously, the extract-components script would create the same number of layers of composites as the page it captures, but it would output a new class for every usage of any composite (since we don't want to replicate all the component logic).
I changed the script to output a single type for each type in the input; each generated component takes an index indicating which output it should return. This should be closer to how the original code behaves, especially with respect to VM function call lookups, where the degree of polymorphism makes a difference.
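The shape of the generated output might look roughly like this (a sketch with invented names; the real generated code is not shown here):

```javascript
// One factory per input *type*: every usage site shares the same function
// identity, so VM call sites see the same degree of polymorphism as the
// original page, rather than a fresh class per usage.
function makeCaptured(outputs) {
  return function Captured(props) {
    // The index picks which captured output this particular usage renders.
    return outputs[props.index];
  };
}

// All usages of "Header" go through one shared function:
const Header = makeCaptured(['<h1>One</h1>', '<h1>Two</h1>']);
Header({ index: 0 }); // the first captured usage
Header({ index: 1 }); // the second captured usage
```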
I re-recorded the benchmarks with the new scripts. They run significantly faster:
```
Comparing old.txt (control) vs new.txt (test)
Significant differences marked by ***
% change from control to test, with 99% CIs:
* ssr_pe_cold_ms_jsc_jit
% change: -41.73% [-43.37%, -40.09%] ***
means: 39.3191 (control), 22.9127 (test)
* ssr_pe_cold_ms_jsc_nojit
% change: -44.24% [-46.69%, -41.80%] ***
means: 45.8646 (control), 25.5764 (test)
* ssr_pe_cold_ms_node
% change: -45.61% [-47.38%, -43.85%] ***
means: 90.1118 (control), 49.0116 (test)
```
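The % changes and 99% CIs above can be computed along these lines (a sketch using a normal approximation on the difference of means, scaled by the control mean; the actual analysis script is not shown here and may differ):

```javascript
function mean(xs) {
  return xs.reduce((a, b) => a + b, 0) / xs.length;
}

// Sample variance (n - 1 denominator).
function variance(xs) {
  const m = mean(xs);
  return xs.reduce((a, x) => a + (x - m) * (x - m), 0) / (xs.length - 1);
}

// Percent change of test vs control, with an approximate 99% CI
// (z = 2.576; the control mean is treated as fixed for simplicity).
function percentChange(control, test) {
  const mc = mean(control);
  const pc = 100 * (mean(test) - mc) / mc;
  const se = 100 * Math.sqrt(
    variance(test) / test.length + variance(control) / control.length
  ) / mc;
  const z = 2.576;
  return { pc, lo: pc - z * se, hi: pc + z * se };
}
```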
This is probably in part due to the changes here, but the page I captured has also changed somewhat in the meantime, and there seem to be slightly fewer components in the hierarchy, so the two recordings aren't directly comparable. Going forward, though, we can use this benchmark, which should be more accurate. I also included an identical copy that uses stateless functional components so we can test optimizations to those later.
(cherry picked from commit e5513eceff)
Running this is left as an exercise for the reader, since measure.py isn't currently set up for it. But something like this might work:
```diff
diff --git a/scripts/bench/measure.py b/scripts/bench/measure.py
index 4cedf47..627ec97 100755
--- a/scripts/bench/measure.py
+++ b/scripts/bench/measure.py
@@ -79,15 +79,12 @@ def _measure_ssr_ms(engine, react_path, bench_name, bench_path, measure_warm):
if (typeof React !== 'object') throw new Error('React not laoded');
report('factory_ms', END - START);
- globalEval(readFile(ENV.bench_path));
- if (typeof Benchmark !== 'function') {
- throw new Error('benchmark not loaded');
- }
+ globalEval("bm = (function(){" + readFile("bench-createclass-madman.js") + "})");
+ bm();
var START = now();
- var html = React.renderToString(React.createElement(Benchmark));
- html.charCodeAt(0); // flatten ropes
+ bm();
var END = now();
- report('ssr_' + ENV.bench_name + '_cold_ms', END - START);
+ report('cc_' + ENV.bench_name + '_cold_ms', END - START);
var warmup = ENV.measure_warm ? 80 : 0;
var trials = ENV.measure_warm ? 40 : 0;
@@ -119,7 +116,7 @@ def _main():
return 1
react_path = sys.argv[1]
- trials = 30
+ trials = 60
sys.stderr.write("Measuring SSR for PE benchmark (%d trials)\n" % trials)
for i in range(trials):
for engine in [
@@ -132,7 +129,7 @@ def _main():
sys.stderr.flush()
sys.stderr.write("\n")
- trials = 3
+ trials = 0#3
sys.stderr.write("Measuring SSR for PE with warm JIT (%d slow trials)\n" % trials)
for i in range(trials):
for engine in [
```