2010-12-26

Vintage Firefox

When I occasionally need to check how far back a feature was introduced in Firefox, it usually turns out I only have the last few versions installed, and it takes a while before I find where mozilla.org keeps the old releases. So here goes: a complete archive of the latest mac release of every major version of Firefox from 0.8 to date (3.6.13 being the most recent at the time of writing) -- tweak the urls, or browse ftp://ftp.mozilla.org/pub/mozilla.org/firefox/releases/ yourself, for Linux, Windows or otherwise -- plus the Firebug releases you need to do any useful web development work with them:


(Corresponding Gecko and Firebug versions -- the second and third messiest Firefox facts to dig up -- come from the Gecko page on MDC and from the install.rdf of the corresponding Firebug xpi:s archived at getfirebug.com.)

Never ever download and install Firefox from any other site! There is a huge, shady "pay per install" market of people bundling other people's software with malware (bot nets, typically); read all about it from Symantec (pdf) or Kevin Stevens (also pdf, the latter from Black Hat DC 2010). This goes for all executable programs you download and install on your machine; if you have no reason to trust the source, get it from a source you do trust.

2010-12-12

Opera 11 extensions @ Add-on-Con

I attended Add-on-Con again this year (I think I missed last year's), and figured I should sum up some of the interesting and/or useful stuff I picked up while there. What I found most gratifying was the upcoming Opera (11) add-on API, which takes the same approach as the Chrome and Safari herd: the clean, HTML5 / postMessage / content-script based design we have grown to expect, with optional background and popup pages. (Mozilla kind of wants to get there too, but does not expect to have anything remotely near it for at least another year, and I don't think it's even on any road maps yet.)

In Opera's current version, they don't go quite as far as Chrome does with its process separation, so an extension's popup can actually read from and write to localStorage on the same origin as its background page (which lets you simplify the message passing a little, if you're not aiming for easy porting). Your content script runs in its own global object; it has a window reference to the current window object, as the page sees it, but in the interest of not automatically leaking your script's guts into the page's global scope, that window is not in the inheritance chain, so you actually have to use long references like window.document and window.localStorage to access them. (I tried assigning this.__proto__ = window;, which in theory should import all window properties into my content script's scope, and while that seemed to allow the former, it still failed on the latter, so I guess I hit unsupported stuff or beta bugs.)
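
For concreteness, this is roughly what reaching the page's world looks like from an Opera content script (my own illustration based on the description above, not code from Opera's docs):

// in an Opera 11 extension content script:
window.document.title;               // the page's document, as the page sees it
window.localStorage.getItem('key');  // the page origin's localStorage
// bare document or localStorage references would resolve against the
// content script's own global object instead, and not reach the page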

To try it out, I made my github improved user script an Opera 11 extension. If you check out or fork the codebase, run rake to build the zipped-up oex from the content script, which lives in includes/. (All scripts in includes/ that have a // ==UserScript== block with Greasemonkey-style @include / @exclude rules run -- not at the DOMContentLoaded event, but at page start -- so you have to set up your own DOMContentLoaded listener in a browser-forked section, as sketched below, if you want to execute under the same page conditions in Opera as in Firefox or a Chrome content script set to load at page ready.)
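
The browser fork can be as small as this (a sketch; the window.opera test is my assumption of a workable way to tell the environments apart):

// ==UserScript==
// @include http://github.com/*
// @include https://github.com/*
// ==/UserScript==

function init() {
  // ... the main part of the script ...
}

if (window.opera) // Opera runs includes/ scripts at page start
  window.document.addEventListener('DOMContentLoaded', init, false);
else
  init(); // Greasemonkey / Chrome at page ready can run it right away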

Touting standards, Opera decided that a browser extension is essentially a Widget, and reused the W3C Widget standard they helped coin a while back, making its manifest sit in config.xml in the oex (a zip file) root directory (and emphasized that it also means you have to have an index.html next to it – the background page – but that it can be empty, if you like). For instance:

<?xml version="1.0" encoding="utf-8"?>
<widget xmlns="http://www.w3.org/ns/widgets" version="1.0.0-0">
  <name>Github improved</name>
  <icon src="icon.png"/>
  <author href="http://ecmanaut.blogspot.com/">Johan Sundström</author>
  <description xml:lang="en">Adds github changeset unfolding and other site improvements.</description>
</widget>

This also means that you can use Widget storage and config-based preference init, if you like. It even seems to be the recommended way, as that gets you unlimited storage (actually: the same total cap Opera sets for itself globally), whereas your background page's localStorage is subjected to the same arbitrary per-domain storage size cap that web sites get.
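
A minimal sketch of what that looks like from the background page (widget.preferences is the Widget spec's Storage-alike; the key name here is made up):

// in the background page's script:
if (widget.preferences.getItem('unfoldAll') === null)
  widget.preferences.setItem('unfoldAll', 'true'); // preference init
var unfoldAll = widget.preferences.getItem('unfoldAll') === 'true';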

As for extension distribution, they of course have their own extension gallery you may want to submit your things to, and like all human-driven approval processes, they have a slightly anal process to get through first. Essentially, the less you can state about your add-on, the fewer opportunities you'll have to fail their test criteria; I was advised by the presenters not to add an add-on homepage, for example.

Currently, every submission attempt in the approval process requires you to bump your version number, so you can't aim for releasing a final 1.0 and treat all the attempts on the way there as release candidates 1 through N; my test add-on went through three revisions before coming out, ending up named 1.0.0-0. I was also required to list change notes, which get shown on the download page even for an add-on that never existed before. Badly implemented good intentions also feature in the insistent multi-stage wizard that walks you through many steps on every submission (screen shots, icons, et cetera), though there is at least the option to copy data from a previous version.

I think I may make Opera extensions of some of the extension-worthy hacks I do, but having tried the Chrome extension gallery's really smooth process, showing how it should be done, I don't think I'll have the patience to jump through Opera's hoops very often unless they improve notably.

To me, the bottom line is that Opera's add-on system doesn't make it prohibitively difficult to support Opera too, if you are already making an add-on for Chrome that does not rely on lots of deeper-integration Chrome API:s. It still surprises me that while they have the best (and first) technical user javascript implementation a browser has ever had, they gave it the worst UI ever: a config switch listing a directory into which you yourself drop user scripts manually, after having modded them to add a DOMContentLoaded event listener to start the main part of the script. Their js hooks still thoroughly beat both Greasemonkey's and Chrome's, yet the lack of installation when visiting a *.user.js file locks that feature set in for only the most expert browser users.

2010-09-19

Comfy browser performance testing

I have been poking around with some performance tests and profiling lately: first with Steve Souders' Browserscope, which gets the job done at the convenience cost of setting up a publicly hosted web page and tying it to a Google account (that you might have to create first), and then, for the loading performance of different ways of organizing script tags, images, iframes, style sheets and so on, with Cuzillion, by the same author.

Today, I found and toyed around a bit with jsperf.com by Mathias Bynens, which does much of the boring setup job for you when it comes to micro-benchmarking one snippet of javascript against another, assuming the thing you want to test is synchronous and doesn't involve complex preconditions. (I wonder what makes Array copying using a saved Array.prototype.slice significantly slower than the other, more evenly performance-matched variants?) As should always be stated when mentioning benchmarks and optimization, this kind of micro-benchmarking is less relevant than finding your code's hot spots and picking better algorithms where relevant. That said, it's still academic fun to play with this kind of thing.
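
For reference, the variants compared look roughly like this (my reconstruction of the kind of test cases involved, not the actual jsperf.com test):

var arrayish = document.getElementsByTagName('div'); // something array-like
var savedSlice = Array.prototype.slice;

var a = savedSlice.call(arrayish, 0);            // saved reference -- oddly slower
var b = [].slice.call(arrayish, 0);              // fresh array literal every call
var c = Array.prototype.slice.call(arrayish, 0); // full property lookup every call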

For this kind of small test, jsperf.com does all the Browserscope setup work for you, and lets others improve (fork) your test after the fact, adding versions you didn't think of yourself that do the same thing. I have a vague plan of making it run some performance tests for a little code snippet I wrote recently, which deep-copies a nested (JSON:able) structure from a parent window where there may be crud on Object.prototype (the code should run in an iframe free of such), to benchmark it against a mere JSON.parse(JSON.stringify(taintedObject)), which would also clean away the crud:

var _slice = [].slice, _toString = Object.prototype.toString;
function array(arrayish) {
  return _slice.call(arrayish, 0);
}

function isArray(obj) {
  return '[object Array]' === _toString.call(obj);
}

// Take a nested structure from a hostile window (presumably full of crap in its
// Object.prototype, et cetera) and return a cleaned-up version with this window
// object's (pure) Array and Object constructors.
function deepClone(obj) {
  if ('object' !== typeof obj) return obj;
  if (null === obj) return null;
  if (isArray(obj)) return array(obj).map(deepClone);
  var clean = {};
  for (var key in obj)
    if (obj.hasOwnProperty(key))
      clean[key] = deepClone(obj[key]);
  return clean;
}

What surprised me about the above code, when testing it on the kinds of payloads it would typically handle (data somewhat heavy on largeish strings), was that it actually was a bit faster than using the browser native JSON codecs, whereas, for larger inputs, native code always won out. Sometimes you just have to test stuff, to find out what wins.
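
For the record, this is the kind of quick-and-dirty harness I mean -- jsperf.com does the same thing with proper calibration and statistics; payload and iteration count here are arbitrary:

var payload = { title: new Array(1001).join('x'), tags: ['a', 'b'], n: 42 };

function time(label, fn, runs) {
  var start = new Date().getTime();
  for (var i = 0; i < runs; i++) fn(payload);
  return label + ': ' + (new Date().getTime() - start) + ' ms';
}

time('deepClone', deepClone, 10000);
time('JSON round-trip', function (o) {
  return JSON.parse(JSON.stringify(o));
}, 10000);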

2010-09-13

JSONView with copy & paste support

Last weekend I picked up Ben Hollis' JSONView extension, to tinker it into keeping JSON viewed through it copy-and-paste:able, both as a whole and in part. It was rather fun work, and within a few hours I had a patch that did the trick, and a few others to boot:



I sent Ben a patch, but not having heard anything back, I figured I might as well share the results here too, as it's all MIT licensed goodness (as is my patched version, of course). You can either download an installable xpi build, or fork my repository from github.

My edits are all on a branch named after my github user -- a practice I can recommend when mirroring some svn upstream repository (with git-svn, for instance); it simplifies tracking vendor updates to a mere git svn rebase on master (for you -- git-svn, sadly, is not able to push its metadata about the non-git upstream to github), and lets other people reuse your pristine vendor branch for their own purposes.

Enjoy your easier-to-handle JSON! I guess next step would be to integrate these improvements into the Chrome fork Jamie Wilkinson is maintaining. Any takers?

2010-08-23

Friendlier github commits pages

I figured I should announce the news of a much improved github commits page I have been tinkering with for a while, that lets you unfold full details about commits (individually, or in batch) without leaving the page. A screenshot of what the Greasemonkey master branch commits might look like with a few unfolded:

screenshot of github commits page with the user script active

Usage is intuitive; just click anywhere in a commit that isn't already a link, and the commit will load in place (unless already loaded) and unfold its whole changeset, or refold itself again, when clicked a second time. Similarly, individual files changed can be folded and unfolded much the same way, though only by clicking their headers (I often want to copy and paste stuff from diffs, so no click magic there). To unfold or refold every commit on the page, either hit "f", or click the "(un)fold all commits" link at the top or bottom right of the page.

This user script (install link, userscripts.org page with docs and complete change log) is incidentally compatible with extensions like AutoPagerize too, in case you prefer scrolling for content to finding and clicking "Older" (h) and "Newer" (l). And if you just want to try it out without installing anything, try this bookmarklet on any github commits page.

And it's all done by a hundred lines of jQuery, about a dozen lines of css and another dozen lines of Greasemonkey / Chrome portability code that breaks out of their respective sandbox environments to run what the script needs to run in the context of the web page itself.

That latter piece is useful too, by the way, modeled on a trick Anthony Lieuallen came up with for the Greasemonkey wiki at some point. This is what it looks like, including full docs:
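
In outline, the trick serializes a function into a script tag, so it runs in the page's own context rather than in the sandbox (a minimal reconstruction; the original comes with fuller docs):

function runInPageContext(fn) {
  var script = document.createElement('script');
  script.textContent = '(' + fn.toString() + ')();';
  document.documentElement.appendChild(script); // runs synchronously here
  script.parentNode.removeChild(script);        // and can then be tidied away
}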


The above-referred jQuery is already loaded by github's page template, so I don't need to bother grabbing or shipping one of my own. If you are hacking a more hostile site and run Greasemonkey, you might want to add a // @require http://ajax.googleapis.com/ajax/libs/jquery/1.4.2/jquery.min.js line to your script header instead (I haven't tested, but if you try to run jQuery in the mozilla javascript sandbox, anything beyond really basic stuff fails, due to the common sandbox pitfalls that jQuery, sanely, does not much attempt to evade).

All above code is freely poachable (read: MIT license) at github, with as much or little attribution as you like. I would of course be especially happy to see the github people adopt this for their default page templates, as it's really useful stuff. Share and enjoy!

2010-07-28

Showing inline HTML comments of Paul Graham's

I browsed a few of the older of Paul Graham's Essays tonight, dug up my Paul Graham click-to-inline footnotes user-script, which wasn't installed in this Google Chrome profile (install link here), and peeked at the source of a few of the essays which it still doesn't grok.

In doing so, I happened upon some HTML comments -- the next level of shaved-off cutting-floor material left around for our prying eyes, if you will -- many of which were interesting, much like his footnotes. So I hacked up a new version of the script, which inlines those too, showing them as little <a>, <b>, <c>, onward -- each expanding to something looking like an HTML comment when clicked.

Share and enjoy!

To test drive the new feature, you might want to re-read Why Smart People Have Bad Ideas, A Unified Theory of VC Suckage and The Age of the Essay. (And before anyone mentions it: no, I didn't actually get to making it augment his old-style [non-]markup for footnotes. Maybe next time. ;-)

2010-07-14

Optimizing SVGs

The other night I came across this cool Tree of Life page, featuring some pdfs and images of the family relationship of all life on Earth. Great stuff. Among them, this simplified rendition divided into about a hundred sub-families, and their relations:


You see our really ancient common heritage starting at 0 radians, progressing through evolution towards the really highly evolved creatures at two pi radians; birds, crocodiles, turtles and (you are here!) mammals (but in reverse order; sorry -- us mammals are not the last cry in evolution in all ways conceivable :-).

I liked it, but it felt wrong that it was trapped in a pdf; this kind of thing should really be a Scalable Vector Graphics image (SVG, henceforth) with cut-and-pastable text, and both readable and hackable right in the page source, for people like you and me that like to poke around in things.

So I made an exercise turning it into a somewhat nice SVG, to see both how small I could make it, without much effort, and where browsers are at in terms of rendering an inline SVG, these days. I haven't actually tested yet, so it'll be a fun surprise for me too, upon publishing this post. And if your browser doesn't render it, you still had the rasterized version above, or the source pdf (35008 bytes long).

Oh, and for the curious, there's a public git repository of all the changes on github, one step at a time, from the first version (where it's helpful to have a friend with Adobe Illustrator, for instance, to do an initial machine translation of the pdf to a workable yet messy SVG). For reference, this page does not embed the fully minimized end result, which weighed in at 14852 bytes (or 6038 as a gzipped svgz).

(I consider those versions cheating, as the line data itself has been compressed somewhat beyond the point where it's still hackable.)

If you want to play around with this kind of thing, and get familiar with hand-editing SVG files, I can whole-heartedly recommend Sam Ruby's great library of sub-kilobyte hand-made SVG:s. While I can't find a statement to attest to it at the moment, I believe they are all freely MIT licensed (I think I asked him in person at SVG Open 2009), encouraging you to learn from and play with them. It is a great resource if you want to start playing with this yourself and want to pick up on some of the tricks of the trade, since they, on average, contain pretty much 100% signal, 0% noise.

Oh, and there's the SVG specification, for when you are curious about something specific. If you want to learn just a minimal subset that can do almost everything, look at the d attribute of the <path d="turtle graphics here"/> element.

And here is the outcome of my own craftsmanship, for the browsers that get it:

Spirochaetes Chlamydias Hyperthermophilic bacteria Cyanobacteria Low-GC Gram-positives High-GC Gram-positives Deinococcus/Thermus Proteobacteria Crenarchaeota Euryarchaeota Haptophytes Brown algae Diatoms Oomycetes Dinoflagellates Apicomplexans Ciliates Eudicots Monocots Magnoliids Star anise Water lilies Amborella Conifers Gnetophytes Ginkgo Cycads Ferns Horsetails Whisk ferns Club mosses and relatives Hornworts Mosses Liverworts Charales Coleochaetales Chlorophytes Red Algae Glaucophytes Kinetoplastids Euglenids Heteroloboseans Parabasalids Diplomonads Foraminiferans Cercozoans Radiolarians Amoebozoans Club Fungi Sac Fungi Arbuscular Mycorrhizal Fungi "Zygospore Fungi" "Chytrids" Microsporidia Choanoflagellates Glass sponges Demosponges Calcareous sponges Placozoans Ctenophores Cnidarians Bryozoans Flatworms Rotifers Ribbon worms Brachiopods Phoronids Annelids Mollusks Arrow worms Priapulids Kinorhynchs Loriciferans Horsehair worms Nematodes Tardigrades Onychophorans Chelicerates Myriapods Crustaceans Hexapods Echinoderms Hemichordates Cephalochordates Urochordates Hagfishes Lampreys Chondrichthyans Ray-finned fishes Lobe-finned fishes Lungfishes Amphibians Mammals Turtles Lepidosaurs Crocodilians Birds

Unfortunately Blogger intersperses it with <br> tags if I leave the new-lines in, so see github for a cleaner version. No luck with my current set of browsers, with at least this doctype and HTML version. It does degrade to showing the text content of all the families, though, which a PDF wouldn't.

2010-07-11

Google styleguides for JSON APIs

I just eyed through Google's dos and don'ts style guide for when exporting JSON APIs. Overall it's pretty good, ranging from the very basics of "abide by the JSON specification" (though stated at depth, presumably for the JSON illiterate, with all implications of what data types are available, what syntax is legal and the like) to how to do naming, grouping of properties, how to introduce and deprecate identifiers, what to leave out, how to represent what kinds of data, and so on.

It of course doesn't guarantee that the outcome will be good APIs (the JSON exported by Google Spreadsheets, at least in the incarnations I peeked at some years ago, was an absolutely horrible auto-translation from XML that even mangled parts of the data, due to its imperfect representation of a data grid), but it does protect against many needless pitfalls.

Time


Not all of its tips are great, though. The rest of this post is a rant about time, and how it's more complicated than you think (unless you have ever run across this). I specifically want to warn about its recommendation on Time Duration Property Values (my emphasis that it's talking about amounts of time rather than timestamps, which ISO 8601 is great for), which it suggests to be encoded ISO 8601 style. Example (comments are of course not part of the output):

{
  // three years, six months, four days, twelve hours,
  // thirty minutes, and five seconds
  "duration": "P3Y6M4DT12H30M5S"
}

Don't do this!

That is a daft idea and/or example. Think about it for a moment. If you truly want to convey the length of a duration of three years, six months, four days, twelve hours, thirty minutes and five seconds, the total number of seconds should be computable from it with perfect precision, right? To this application, after all (whatever it is), the number of days, hours and minutes -- and even those last five seconds -- are significant, so we should get them right.

Here be dragons. Human readouts of time durations like the one above don't convey that information. If you talk about durations, you have to pick one well-defined unit of time (or -- less usefully -- a number of units that translate to each other by well-defined rules, requiring no additional data inputs) and stick to it.

I'd recommend picking days, seconds, milliseconds or nanoseconds as your (one!) unit of choice, depending on what kind of duration you represent and what its likely uses are. Declare the unit (so the property name suggests it, if you're kind) and stick to integers.
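
In other words, prefer something like this (the property name is just an example):

{
  // twelve hours, thirty minutes and five seconds, unambiguously
  "durationSeconds": 45005
}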

Because the number of seconds in three years, six months, four days, twelve hours, thirty minutes and five seconds depends on when you start counting, and/or what you mean by "year", "month", "day" and "hour" (earth-centric time is complicated -- some minutes even have more than 60 seconds).

Typically, it's in reference to some specific reference time, from which to increment the year by 3, the month number by six, the date by four days, and finally add another 12h, 30m and 5s. But we didn't get a reference time; we just have a duration. It's like a vector denoting a coordinate in a coordinate system. You can't tell what it points at, without knowing where it points from.

And humans happily think up some well-defined case, like counting from midnight this January 1st, find that it works out to 2013-06-04 12:30:05 after adding, maybe even compute it to 108041406 seconds total, and believe it's a well-defined amount of time. It just isn't; those six months only turned out to be 182 days because 2013 isn't a leap year. If we had started counting from 2009 and ended up on 2012-06-04 12:30:05, they would have wound up 183. And if we had started counting from March 1st instead of January 1st, they would have been 185. However you see it, we've suddenly got all this seemingly second-precision duration with a fuzz margin of plus or minus a day and a half -- a range of more than 250,000 seconds -- which compares most unfavourably to the advertised second precision.
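
You can watch the fuzz yourself by letting the Date object do the calendar arithmetic (in UTC, to keep daylight savings out of it; the anchor dates below are picked arbitrarily, not from the example above):

function daysInSixMonthsFrom(y, m, d) { // m is 0-based, as Date counts months
  return (Date.UTC(y, m + 6, d) - Date.UTC(y, m, d)) / 864e5;
}
daysInSixMonthsFrom(2013, 0, 1); // 181 -- 2013 is not a leap year
daysInSixMonthsFrom(2012, 0, 1); // 182 -- 2012 is
daysInSixMonthsFrom(2013, 2, 1); // 184 -- starting March 1st instead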

And if you decide that your years are a special 365 * 24 * 60 * 60 seconds long, then ten years from now won't be July 11th, but July 8th. Humans might disagree.

So if you want to represent a duration of time, and you want the API consumer to know how long that duration is, pick a unit and pass an integer. If, say, your first API version had a timestamp for the start of something, and you want API v2 consumers to be able to tell how long after that it completed, pick a unit and pass an integer. And if you have good reason to believe that the consumer (maybe a human) is interested in the end time, pass start and stop timestamps. Programs consuming your JSON will have access to date functions that can compute the amount of time between them, or what date and time it will be a fixed number of given time units after a reference time.
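
The consumer's end of that last variant is a one-liner; in javascript, with ES5's ISO 8601 date parsing (timestamps made up for the example):

var started = new Date('2010-07-11T00:00:00Z');
var stopped = new Date('2010-07-11T12:30:05Z');
var elapsedSeconds = (stopped - started) / 1000; // 45005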

Thank you for taking the time to indulge in thinking and talking about time over JSON APIs.

2010-07-09

List hardlinks on HFS volumes

This one goes out to all of you mac users out there.

I recently made a little hack, hardlinks, that takes the path (or inode) of a hardlinked file and lists the paths of all of its clones (including its own), provided it lives on an HFS volume. Usage is simple:

sudo hardlinks path

or

sudo hardlinks -c inode

You need to have the hfsdebug-lite binary Amit Singh provides installed for it to work, and if you're on a modern mac, you need to install the Rosetta tools from your MacOS X install disk to get hfsdebug to run. (That sudo is needed for hfsdebug to access the raw device of the volume -- after that, hfsdebug drops privileges to the nobody user.)

Source code:

2010-07-06

A peek at Dropbox

On a friend's suggestion, I had a peek at Dropbox for syncing directories between multiple computers, sharing files with people without posting email attachments, and the like. It's got many rather useful properties, but seems a little immature in the unix world; it is unaware of file modes and symlinks (so symlinks will show up as real files, or as nothing at all if orphaned), making it less suitable for syncing git checkouts across multiple machines, as I was hoping to do. As noted in the linked thread, though, the upcoming 0.8 release will become aware of file modes, which is a good start.

What it does seem really good for, though, is auto-syncing preference files, data sets of stuff you want comfy access to wherever you are, breaking through NATs to sync stuff to home machines behind firewalls, and the like. One of the neater ideas I came across in the otherwise mostly uninteresting comments on this tips and tricks post was to set up a home machine to poll for torrent files dropped into some Dropbox-synced directory and start downloading them (to another directory, presumably), instead of doing the same via ssh.

The free plan covers 2GB data stored (plus keeping 30 days of backup history), and if you sign up through a referral link (here's mine), both signee and referrer get a quarter-gig extra quota.

Their iPhone application delivers browsability of the files you sync (your own and those others share with you -- and it should be noted that "sharing" currently implements read/write access only, so you'll want to keep backups and/or trust sharees as you trust yourself), lets you micromanage pictures into your on-phone picture album one by one, and lets you play music and video, but not add them to your on-phone music library.

Similarly, it lets you micromanage a picture at a time back from the device photo library to cloud storage (and connected computers), or make, copy and email (now read-only) urls to any of the files in your dropbox, instead of mailing them as large attachments.

I am not surprised that it doesn't much address the main pain points of the major data-interop inconvenience that is the iPhone (App Store terms probably don't allow them to), but I was more than a little surprised that it doesn't yet measure up to what a normal rsync does for typical machine-to-machine file transfers. It does a good job as a dual-direction sync feature for basic data between yourself and non-technical friends and relatives, though.

2010-07-02

Stop gif animations in Chrome with escape

It occurred to me that one of the basic browser features still missing in Chrome -- turning off gif animations when you hit the escape key -- ought to be implementable as a tiny user script, through canvas:

document.addEventListener('keydown', freeze_gifs_on_escape, true);

function freeze_gifs_on_escape(e) {
  if (e.keyCode == 27 && !e.shiftKey && !e.ctrlKey && !e.altKey && !e.metaKey) {
    [].slice.apply(document.images).filter(is_gif_image).map(freeze_gif);
  }
}

function is_gif_image(i) {
  return /^(?!data:).*\.gif/i.test(i.src);
}

function freeze_gif(i) {
  var c = document.createElement('canvas');
  var w = c.width = i.width;
  var h = c.height = i.height;
  c.getContext('2d').drawImage(i, 0, 0, w, h);
  try {
    i.src = c.toDataURL("image/gif"); // if possible, retain all css aspects
  } catch(e) { // cross-domain -- mimic original with all its tag attributes
    for (var j = 0, a; a = i.attributes[j]; j++)
      c.setAttribute(a.name, a.value);
    i.parentNode.replaceChild(c, i);
  }
}

It mostly works, though for gif images loaded from another domain, we're unfortunately still out of luck. I hope Chrome will soon offer an extension flag for doing privileged canvas operations, such as drawImage, for an image loaded from another domain, like here.

That privilege could even involve a manual extension review process in the Chrome extension gallery, for all I care; it is jarring that we can't fix user experience bugs like this due to the enforced security model.


Edit: As suggested in the tip below, we don't really need to .toDataURL the image, although that gives the best results on pages that apply css styling to img tags that we won't inherit to the canvas tag. The script has been updated to work everywhere; direct install link here.

2010-06-13

Element.pageX and Element.pageY provisions

Do you know how to compute a DOM node's page coordinates, counted in pixels from the document body's top left corner? It sounds like it would be easy, but it isn't. I think it should be. Here is a utility function getViewOffset I cleaned up and lifted out of Firebug (thus MIT licensed, so you may do pretty much whatever you like with it) in early 2008 that computes it, taking into consideration static and non-static positioning, scroll positions of every parent node, special cases for table and body nodes and (optionally not) the current document's position in its parent windows, if it lives deep down in some iframe of the top document:
function getViewOffset(node, singleFrame) {
  function addOffset(node, coords, view) {
    var p = node.offsetParent;
    coords.x += node.offsetLeft - (p ? p.scrollLeft : 0);
    coords.y += node.offsetTop - (p ? p.scrollTop : 0);

    if (p) {
      if (p.nodeType == 1) {
        var parentStyle = view.getComputedStyle(p, '');
        if (parentStyle.position != 'static') {
          coords.x += parseInt(parentStyle.borderLeftWidth);
          coords.y += parseInt(parentStyle.borderTopWidth);

          if (p.localName == 'TABLE') {
            coords.x += parseInt(parentStyle.paddingLeft);
            coords.y += parseInt(parentStyle.paddingTop);
          }
          else if (p.localName == 'BODY') {
            var style = view.getComputedStyle(node, '');
            coords.x += parseInt(style.marginLeft);
            coords.y += parseInt(style.marginTop);
          }
        }
        else if (p.localName == 'BODY') {
          coords.x += parseInt(parentStyle.borderLeftWidth);
          coords.y += parseInt(parentStyle.borderTopWidth);
        }

        var parent = node.parentNode;
        while (p != parent) {
          coords.x -= parent.scrollLeft;
          coords.y -= parent.scrollTop;
          parent = parent.parentNode;
        }
        addOffset(p, coords, view);
      }
    }
    else {
      if (node.localName == 'BODY') {
        var style = view.getComputedStyle(node, '');
        coords.x += parseInt(style.borderLeftWidth);
        coords.y += parseInt(style.borderTopWidth);

        var htmlStyle = view.getComputedStyle(node.parentNode, '');
        coords.x -= parseInt(htmlStyle.paddingLeft);
        coords.y -= parseInt(htmlStyle.paddingTop);
      }

      if (node.scrollLeft)
        coords.x += node.scrollLeft;
      if (node.scrollTop)
        coords.y += node.scrollTop;

      var win = node.ownerDocument.defaultView;
      if (win && (!singleFrame && win.frameElement))
        addOffset(win.frameElement, coords, win);
    }
  }

  var coords = { x: 0, y: 0 };
  if (node)
    addOffset(node, coords, node.ownerDocument.defaultView);

  return coords;
}
(The optional second argument, when set, turns off recursing into parent frames, so you get document-relative coordinates.)

As I recently had a use for half of it -- computing Y positions -- I hacked out separate smaller getYOffset and getXOffset versions, and when I had those, it occurred to me that these ought to be properties in the Element DOM interface and implemented behind the curtains, so we could simply write img.documentX, img.documentY, et cetera, or img.pageX and img.pageY, if we wanted the coordinates of the image, counting from the outer(most) surrounding parent window. Hacking up a mini-library for that was a breeze from these primitives:
function documentX() { return getXOffset(this, 1); }
function documentY() { return getYOffset(this, 1); }
function pageX() { return getXOffset(this); }
function pageY() { return getYOffset(this); }
Node.prototype.__defineGetter__('documentX', documentX);
Node.prototype.__defineGetter__('documentY', documentY);
Node.prototype.__defineGetter__('pageX', pageX);
Node.prototype.__defineGetter__('pageY', pageY);
So now you could write code looking like this to inspect coordinates of things hovered by the mouse:
Hovered element: <input type="text" id="node-coords" /> &lt;=
<input type="text" id="mouse-coords" />
<script src="http://ecmanaut.googlecode.com/svn/trunk/lib/getXOffset.js"></script>
<script src="http://ecmanaut.googlecode.com/svn/trunk/lib/getYOffset.js"></script>
<script>
var mouse = document.getElementById('mouse-coords');
var output = document.getElementById('node-coords');
var hovered = document.body, saved = hovered.style.outline || '';
hovered.addEventListener('mousemove', hovering, false);

function hovering(e) {
  mouse.value = 'mouse @ ' + e.pageX + ', ' + e.pageY;
  var node = e.target;
  if (node === hovered) return;

  var what = node.tagName + ' @ ';
  var where = node.pageX + ', ' + node.pageY;
  output.value = what + where;

  hovered.style.outline = saved;
  saved = (hovered = node).style.outline;
  hovered.style.outline = '1px dashed lightBlue';
}
</script>

Until this kind of ease gets into DOM 3 or 4 (we can hope, at least), your code is better off using getViewOffset instead, though, when you want both properties anyway:
// ...
var coords = getViewOffset(node);
var where = coords.x + ', '+ coords.y;
// ...

2010-06-12

Google BOM feature: ms since pageload

I expect this feature has been around for quite a while already, but this is the first time I have seen it: stealthy browser object model improvements letting a web page figure out how many milliseconds ago it was loaded. It presumably works in any web browser that is Chrome or that runs the Google Toolbar:

function msSincePageLoad() {
  try {
    var t = null;
    if (window.chrome && chrome.csi)
      t = chrome.csi().pageT;
    if (t === null && window.gtbExternal)
      t = window.gtbExternal.pageT();
    if (t === null && window.external)
      t = window.external.pageT;
  } catch (e) {}
  return t;
}


In Chrome it (chrome.csi().pageT, that is) even reports the time with decimals, for sub-millisecond precision.

Google, this kind of browser improvements should be blogged! Maybe even documented. All I caught in a brief googling for it were two now-garbage-collected tweets by Paul Irish, leading to where it was committed to Chromium, and a screenshot of the feature in action, along with all the other related features not brought up now:



The rest of this post, about how I happened upon it myself, is probably only interesting to the insatiably curious:

Upon having grown weary of all the Chinese automated porn/malware comment spam that passes through Blogger's sub-par spam filtering to my moderation inbox, I decided to replace it with one that is maintained by a service specializing in (and presumably committed to!) blog comments: Disqus. In the process, being lazy, I decided to let their template wizard install itself in my blog template, which required dropping my old blogger template, upgrading it a few versions, and then (only required by my own discerning taste) attempting to manually weed out the worst crud from the new template (none of which was added by Disqus, I might add).

In the apparently uneditable <b:include data='blog' name='all-head-content'/> section sat a minified version of approximately this code, which seems to look up the vertical position of some latency-testing DOM node passed to it and, the first time the visitor scrolls the page, note whether it's above the fold (which in Blogger's world is apparently a constant 750 pixels into the page :-). And maybe other things.

(function() {
  function Ticker(x) {
    this.t = {};
    this.tick = function tick(name, data, time) {
      time = time ? time : (new Date).getTime();
      this.t[name] = [time, data];
    };
    this.tick("start", null, x);
  }

  window.jstiming = {
    Timer: Ticker,
    load: new Ticker
  };

  try {
    var pt = null;
    if (window.chrome && window.chrome.csi)
      pt = Math.floor(window.chrome.csi().pageT);
    if (pt == null && window.gtbExternal)
      pt = window.gtbExternal.pageT();
    if (pt == null && window.external)
      pt = window.external.pageT;
    if (pt) window.jstiming.pt = pt;
  } catch (e) {}

  window.tickAboveFold = function tickAboveFold(node) {
    var y = 0;
    if (node.offsetParent) {
      do y += node.offsetTop;
      while ((node = node.offsetParent));
    }
    if (y <= 750) window.jstiming.load.tick("aft");
  };

  var alreadyLoggedFirstScroll = false;

  function onScroll() {
    if (!alreadyLoggedFirstScroll) {
      alreadyLoggedFirstScroll = true;
      window.jstiming.load.tick("firstScrollTime");
    }
  }

  if (window.addEventListener)
    window.addEventListener("scroll", onScroll, false);
  else
    window.attachEvent("onscroll", onScroll);
})();

Safari Reader Underwhelm

I was somewhat underwhelmed by Safari Reader, mainly on account of the enforced extra friction their designers added to its UI, presumably to make it "look nice":

  • On top (and bottom, but I personally don't mind that part), it adds a drop shadow that makes text harder to read at the very top, which is where it is my personal habit to read.

  • Even worse, on navigating with the keyboard (arrow keys, Page Down or Up, and worst of all, Home / End), it painstakingly slowly SCROLLS you there -- instead of just snapping into place, as Google Chrome, for instance, would. On a really long page such as this great current SSD disks review, it takes over a second to get from top to bottom or back, which is just massively annoying.

    This actually applies to all of Safari to a lesser degree, I just never noticed it before, as I don't usually use Safari when reading long pages. In normal browsing, it seems to do it in about nine frames (and with ugly visual blits), over the course of somewhat too long a fraction of a second (this even on my massively over-powered state-of-the-art mac pro extra-everything).

    Update: This Safari "feature" can be disabled in System Preferences / Appearance on the mac; uncheck the box "Use smooth scrolling". (Thanks, Fredrik; I was unaware of this.)

  • A slight missed opportunity: every sub-page in the page gets its own top-right discreet "Page n of m" header. That much is great. It just isn't also a permalink to that sub-page, so if you want to toss someone a link to the relevant part (to your own discussion) of the known-to-be-huge article, well, you're out of luck and have to dig it up in non-Reader mode. Unwebby!

As a statement about what we should demand of our digital readership experience, I very much appreciate the idea (yes, in the greater business reality, it's a hypocritical move to strip ads from the web with one hand, while enforcing ads on devices you stepmother with the other -- but I care more about the web). They have just been encumbered by a bit too much Apple designerism. I hope that copycats will borrow the good parts and throw away the bad. Please don't copy Apple's bugs.

2010-04-03

Shaving cycles off your Firefox addon dev cycle

When developing Firefox extensions (like Greasemonkey), some of the best invested time you'll ever spend is on stuff making your change-restart-test cycle shorter, smoother and/or more convenient to you, whatever that looks like. Here are my own top suggestions:

  • As the Firefox extension model still requires a browser restart for many changes, make sure you at least don't have to repeat the messy part of packaging your xpi and installing it over and over (unless that is the part you are debugging at the moment).

    After reading up on how extensions are installed on your operating system, replace the directory named after your extension's guid with either a symlink to the directory where you develop your extension, or a file containing the path to it (if, say, you work on Windows with some file system that doesn't support symbolic links). Now you can edit your code, restart Firefox, and see the effect immediately, without the extra steps of building an xpi and installing it every time you change something.

    (If your extension uses the more elaborate jar double-zipping build procedure, my suggestion is "don't do that while developing", as doing more invariably takes more time and effort.)

  • Install the QuickRestart extension, as Firefox developers don't give you access to the crucial feature of restarting your session via keyboard hot key.

  • Check out the File menu and learn the hot key.

  • Some things can be updated without a browser restart, even in Firefox! (*) Set your development profile's about:config preference nglayout.debug.disable_xul_cache = true -- the entire page documenting it is worth a read-through.

  • Maybe you're poking around with things in an overlay in browser chrome and really just wish you could have a read-eval-print console into it, kind of like Firebug's. That's what MozRepl does. Install it, and now you can telnet or netcat into your browser session (from a terminal window, or maybe emacs), after focusing the window you want a prompt in.

  • If you are hacking on Greasemonkey specifically, and, say, poking with any of the stuff concerning gm_scripts and what is stored there, I suggest you cd into your gm_scripts directory, run git init, git add * and git commit -a -m 'Something descriptive' so you can revert to a prior state effortlessly with a git reset --hard before restarting, when your code changed something and you want to restart from a known earlier state.

That's all for today. Doubtlessly there are tons of other things that slim up the dev cycle. Feel free to post, or better still, link to other useful things to do. The docs at MDC are unfortunately too ill organized to easily strike gold there, so web pages that aggregate the nuggets are particularly valuable resources.

(*) Mozilla developers are hard at work making future extensions based on their (recently rebooted) Jetpack (SDK) project able to update without a browser restart, just like Greasemonkey scripts or Google Chrome extensions can. Any year now.

2010-01-13

$Revision$ keyword expansion with git-svn

I just ended up needing to do subversion keyword expansion on a bunch of files in a subversion repository I track with git, but git-svn doesn't do that kind of thing. So I hacked up a tiny shell script I called git-svn-keyword-expand, which looks like this:
#! /bin/zsh
typeset -A revs
files=($@)
for n in {1..$#files} ; do
  f=$files[$n]
  h=$(git log -n 1 --pretty=format:%H -- $f)
  revs[$h]=${revs[$h]-$(git svn find-rev $h)}
  perl -pi -e 's/\$Revision[ :\d]*\$/\$Revision: '${revs[$h]}' \$/gp' $f
done

Just feed it a list of files, and it'll update them in place to whatever current revision they are at. It's a rather dumb tool, which doesn't cross reference the file list with which of those files actually have an svn:keywords property including Revision in it, and it doesn't do any other keywords, but people that want that kind of thing might at least grab this as a starting point. Cheers!

2010-01-12

Current state of HTML5 storage events

In attempting to put HTML5 storage to use for cross-window configuration synchronization purposes, I just performed some tests on what works how in different browsers, and how it relates to the not-yet-frozen HTML5 specs, which at present declare that storage events should work like this:

On changing a value in localStorage (running localStorage.setItem(), localStorage.removeItem(), or localStorage.clear(), and presumably their javascript setter / deletion convenience API:s, where supported), an e.type === "storage" event should be fired in all other windows open to the same domain (e here being the event object seen by the event listener). (Nothing specifically forbids firing the same event in the same window too, but avoiding that seems to me to be the intent, and is also the behaviour I would prefer myself.)

On an assignment (localStorage.setItem()), e.key should be the name of the key, e.oldValue its former value, and e.newValue its new value.

On a deletion (localStorage.removeItem()), e.newValue should instead be the null value.

On a clear-all (localStorage.clear()), e.key, e.oldValue and e.newValue should all be the null value.
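
The consuming end is plain spec usage; something like this, in every window that wants to hear about changes:

window.addEventListener('storage', function (e) {
  // e.key, e.oldValue and e.newValue as per the cases above
  console.log('storage: ' + e.key + ': ' + e.oldValue + ' -> ' + e.newValue);
}, false);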

No browser seems to do quite that yet, but Safari and Chrome are at least rather close. My tests have covered Firefox 3.5.7, Safari 4.0.4 (6531.21.10), WebKit 4.0.4 (6531.21.10, r53119), Google Chrome 4.0.249.49 (35163) beta, Chromium 4.0.267.0 (34084), all on MacOS X (10.6). They all ignore which window triggered the storage event (= fire the event in all applicable windows).

Firefox, sadly, never provides any of the three properties mentioned above.

Chrome, Chromium, Safari and WebKit all provide all three, but instead of setting e.key to null for the clear event, they set it to the empty string. The rest works beautifully, though.

Whether the clear event fires when clearing an already empty localStorage varies a little -- Firefox and Safari do, the others don't.

No browser fires events when deleting an already deleted value, but Firefox fires events when setting a property to the same value it already had (the others don't).

I spent a minimal amount of time peeking at what IE8 does and how, but didn't get it working. It supposedly supports at least the localStorage function-calling API, according to MSDN. The tests above may miss something subtle, maybe in how to use attachEvent instead of the W3C-standard addEventListener (don't ask me why they still don't support that).
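
For reference, this is the kind of fallback wiring I mean (whether IE8 wants the handler on window or on document is exactly the sort of subtlety I may have missed):

function onStorage(e) {
  e = e || window.event; // IE hands you the event via window.event
  // inspect e.key, e.oldValue and e.newValue, where available
}
if (window.addEventListener)
  window.addEventListener('storage', onStorage, false);
else if (window.attachEvent) // IE8 -- possibly on document instead
  window.attachEvent('onstorage', onStorage);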