Today I found and toyed around a bit with jsperf.com by Mathias Bynens, which does much of the boring setup work for you when it comes to micro-benchmarking one snippet of JavaScript against another, assuming the thing you want to test is synchronous and doesn't involve complex preconditions. (I wonder what makes Array copying via a saved Array.prototype.slice significantly slower than the other, more evenly performance-matched variants?) As should always be stated when mentioning benchmarks and optimization: this kind of micro-benchmarking matters far less than finding your code's hot spots and picking better algorithms for them. That said, it's still academic fun to play with this kind of thing.
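To make that Array copying aside concrete, the variants under comparison were roughly of this shape (my reconstruction, not the exact code from the test page):

var _savedSlice = Array.prototype.slice;

function copyViaSavedSlice(arr) {
  return _savedSlice.call(arr, 0); // the saved-reference variant that measured slower
}

function copyViaSlice(arr) {
  return arr.slice(0); // calling slice directly on the array
}

function copyViaConcat(arr) {
  return arr.concat(); // concat with no arguments also clones
}

function copyViaLoop(arr) {
  var copy = new Array(arr.length);
  for (var i = 0; i < arr.length; i++) copy[i] = arr[i];
  return copy;
}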
For this kind of small test, jsperf.com does all the Browserscope setup work for you, and lets others improve (fork) your test after the fact, adding versions you didn't think of yourself that do the same thing. I have a vague plan of trying to make it run some performance tests for a little code snippet I wrote up recently that does a deep copy of a nested (JSON-able) structure from a parent window where there may be crud on Object.prototype (the code should run in an iframe free of such), to benchmark against a mere JSON.parse(JSON.stringify(taintedObject)), which would also clean away the crud:

var _slice = [].slice, _toString = Object.prototype.toString;
function array(arrayish) {
  return _slice.call(arrayish, 0);
}

function isArray(obj) {
  return '[object Array]' === _toString.call(obj);
}

// Take a nested structure from a hostile window (presumably full of crap in its
// Object.prototype, et cetera) and return a cleaned-up version with this window
// object's (pure) Array and Object constructors.
function deepClone(obj) {
  if ('object' !== typeof obj) return obj;
  if (null === obj) return null;
  if (isArray(obj)) return array(obj).map(deepClone);
  var clean = {};
  for (var key in obj)
    if (obj.hasOwnProperty(key))
      clean[key] = deepClone(obj[key]);
  return clean;
}
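As a usage sketch, assuming the code runs in a clean iframe and parent.config is some hypothetical JSON-able payload living in the tainted parent window:

var tainted = parent.config; // hypothetical payload; any JSON-able structure works
var cleaned = deepClone(tainted); // rebuilt with this frame's pure Array and Object
var roundTripped = JSON.parse(JSON.stringify(tainted)); // the codec variant to benchmark against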
What surprised me about the above code, when testing it on the kinds of payloads it would typically handle (data somewhat heavy on largeish strings), was that it was actually a bit faster than using the browsers' native JSON codecs, whereas for larger inputs, native code always won out. Sometimes you just have to test stuff to find out what wins.
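jsperf does this kind of measurement properly, with statistics; for a quick local sanity check, a crude hand-rolled harness along these lines is enough (the payload and iteration count here are arbitrary stand-ins):

// A hypothetical string-heavy payload, like the ones described above.
var payload = {
  title: 'example',
  body: new Array(1000).join('lorem ipsum dolor sit amet '),
  tags: ['a', 'b', 'c']
};

function time(label, fn, n) {
  var start = Date.now();
  for (var i = 0; i < n; i++) fn();
  console.log(label + ': ' + (Date.now() - start) + ' ms');
}

time('deepClone', function () { deepClone(payload); }, 1000);
time('JSON round-trip', function () { JSON.parse(JSON.stringify(payload)); }, 1000);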