These scripts use the @require feature of the svn development version of Greasemonkey (the equivalent of #include in C/C++, so if you exert yourself, you can paste the scripts together by hand by inlining the code at the top of the script). Pardon the nuisance; at least it won't ruin this post for you.

I have taken another stab at stigmergy computing, or writing scripts that communicate by changing a common environment, here the web page they execute in.
Anyway, last time, I wanted a decent portable web photo album browser; functionality that would follow me around regardless of where I roam on the web. Something to flip through images with the arrow keys, perhaps with some additional handy features too.
The album browser would consume the very basic microformat
<a href="*.image extension">
, and turn it into a gallery. This meant that I could use my familiar album browser mostly everywhere without any work at all. It also meant that, when encountering some photo site working differently, I could adapt not my album to the site, but the site to my album. A minimal script that changes a single site only very slightly is trivial -- adapting old code rarely is. So this little ecology convened around a minimal ad-hoc image album microformat.

This time I have several microformat consumers convening around a microformat I came up with for destructuring pagination into what is popularly (if somewhat incorrectly) called endless scrolling: a page that, when you approach the end of it, loads another pageful or so by means of XMLHttpRequest. Usually an abomination when some site designer forces that browsing mode on you, but ever so handy at times, if only you get to choose it for yourself.
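The consumer end of that is less magic than it may sound; the trigger is essentially a scroll listener along these lines (a sketch of my own, with an arbitrarily picked margin, not the actual script):

    // Endless-scrolling trigger: when the reader nears the end of the page,
    // kick off loading the next pageful (loadNextPage is sketched further down).
    var loading = false;
    window.addEventListener('scroll', function () {
      if (loading) return;
      var pageHeight = document.documentElement.scrollHeight;
      var scrolledTo = window.pageYOffset + window.innerHeight;
      if (scrolledTo < pageHeight - 600) return; // ~a screenful of margin
      loading = true;
      loadNextPage(); // resets the loading flag once the new items are in place
    }, false);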
Anyway, I started off basing it on the old
<link rel="next">
tag, which is actually standardized, hoping that I would, at least once in a while, not need to produce this bit of the microformat myself, but could ride on data in the web page itself. So far, that has not saved me any work, but has cluttered the microformat into picking next links in two orthogonal ways, depending on what is available, so I'm inclined to write that off as a bad, yet well-intended, idea (see my first reply below for the messy details of how standards bloat like this typically comes about).

I needed three data items: a URL to the next page in the sequence, an XPath expression that matches the items on every page, and an (optional) XPath expression that identifies a pagination pane to update with the last page loaded so far. I came up with three meta tags, named "next-xpath", "items-xpath" and "pagination-container". If the web page gave me those, my pagination script could deal only with undoing pagination, without caring whether the data it actually handles is Flickr photos, vBulletin forum posts, Google or Yahoo search results, or something else entirely.
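A producer user script for a paginated site then boils down to little more than injecting those three meta tags; roughly like this (the XPath expressions are made-up placeholders rather than lifted from any of my actual producers):

    // Producer sketch: advertise the pagination microformat on the current page.
    function addMeta(name, content) {
      var meta = document.createElement('meta');
      meta.name = name;
      meta.content = content;
      document.getElementsByTagName('head')[0].appendChild(meta);
    }
    addMeta('next-xpath', '//a[contains(@class, "next")]/@href');  // placeholder
    addMeta('items-xpath', '//div[contains(@class, "post")]');     // placeholder
    addMeta('pagination-container', '//div[@class="pagination"]'); // optional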
After doing some crazy limbo (which deserves a post of its own) to load the next page into a hidden iframe (both to let the Greasemonkey scripts producing the microformat run, and to get a correct DOM for the page to run the XPaths against -- as it turns out that, amazing as it may sound, browsers can't turn an HTML string into a DOM without ruining the html, head and body tags; at least not Firefox 2 and Opera 9), the remainder of the script was a walk in the park.
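For the curious, the skeleton of the consumer side looks something like this (a sketch under my own naming, reusing the loading flag from the trigger above; the real script does rather more bookkeeping):

    // Load the next page into a hidden iframe, so producer scripts get to run in
    // it and its DOM comes out properly parsed; then splice its items into the
    // current page and refresh the optional pagination pane.
    function getMeta(name) {
      var metas = document.getElementsByTagName('meta');
      for (var i = 0; i < metas.length; i++)
        if (metas[i].name == name) return metas[i].content;
      return null;
    }
    function xpath(expr, doc, context) {
      var found = doc.evaluate(expr, context || doc, null,
          XPathResult.ORDERED_NODE_SNAPSHOT_TYPE, null);
      var nodes = [];
      for (var i = 0; i < found.snapshotLength; i++)
        nodes.push(found.snapshotItem(i));
      return nodes;
    }
    function loadNextPage() {
      var nextXPath = getMeta('next-xpath');
      if (!nextXPath) return;
      var next = xpath(nextXPath, document)[0];
      if (!next) return;
      var url = next.href || next.value; // element node, or an @href attribute node
      var iframe = document.createElement('iframe');
      iframe.style.display = 'none';
      iframe.addEventListener('load', function () {
        var doc = iframe.contentDocument;
        var items = xpath(getMeta('items-xpath'), doc);
        var last = xpath(getMeta('items-xpath'), document).pop();
        for (var i = 0; i < items.length; i++) // assumes the items share a parent
          last.parentNode.appendChild(document.importNode(items[i], true));
        var paneXPath = getMeta('pagination-container');
        if (paneXPath) {
          var pane = xpath(paneXPath, document)[0], fresh = xpath(paneXPath, doc)[0];
          if (pane && fresh) pane.innerHTML = fresh.innerHTML;
        }
        loading = false; // let the scroll trigger fire again
      }, false);
      iframe.src = url;
      document.body.appendChild(iframe);
    }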
Having written up a few microformat producers for various sites, to exercise the microformat and see how well it addresses the problems that show up, and having found and straightened out some minor quirks, I felt the urge to do more with this smallish, but oh so useful, microformat for telling apart content from non-content in web pages. So I decided to hook up my custom keybindings script with this hack, making it assign "p" and "m" to scrolling up and down among the items of these pages. That came down to a whopping ten new lines of code, to make a dozen sites (or, factoring in future microformat producers, an unbounded number of them) scrollable in that way.
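For what it's worth, the gist of those lines, reusing the getMeta and xpath helpers sketched above (a reconstruction, not the verbatim addition to my keybindings script):

    // "p"/"m": scroll to the previous/next item matched by the items-xpath meta.
    var current = -1;
    document.addEventListener('keypress', function (e) {
      if (/input|textarea/i.test(e.target.nodeName)) return; // don't steal typing
      var key = String.fromCharCode(e.charCode || e.keyCode);
      if (key != 'p' && key != 'm') return;
      // Re-run the XPath every time, so items spliced in later count too.
      var items = xpath(getMeta('items-xpath'), document);
      if (!items.length) return;
      current = Math.max(0, Math.min(items.length - 1, current + (key == 'm' ? 1 : -1)));
      items[current].scrollIntoView(true);
    }, false);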
And -- best saved for last -- both microformat consumer scripts coexist happily and work independently and orthogonally; the unpaginated whatevers that get added to the end of the page are browsable with the new keyboard bindings, so you can work mostly any web page as comfortably as you ever would your feed, mail or whatnot reader, right there.
If you craft any microformat producer or microformat consumer user scripts, please tag them as such on userscripts.org, and also tag them with the name or names of the microformat(s) they handle (I picked, somewhat arbitrarily, photo album microformat and pagination microformat for mine), so they form findable microformat ecologies for other users. Any site your microformat gets a producer for improves the value of all the microformat consumers of that format, and any microformat consumer using the format in some useful way improves the value of all the producers.
It is very rewarding doing stigmergy / ecology programming; the sense of organics and growth is very palpable. There are some sharp corners to overcome, and Greasemonkey could help out more, but I'll have to get back to that at some later time.
Do any sites use your proprietary metatags by default? No? Then how does generating that help you over generating a|link[@rel~=next] ?
Why not use the standard for the applicable value and the METAs for the xpath, etc?
They rather certainly don't, no. I think that your first gut reaction and mine was/is the same: for the sub-problem of marking up the next url, link[@rel~=next] is a perfect fit, and should be used.
That may indeed be true, and it is also what I presently do, both on the format producing and consuming sides, and time might even prove it a healthy design choice. At first, I thought I would always get by on link[@rel~=next] aided by two meta tags, but then I found a possible, yet highly unlikely, case where I also needed the XPath expression used to find the next link. Half catering for that case may be over-engineering (says my gut feeling), but when choosing between a minimal, 100% new and 100% orthogonal microformat of three xpath meta tags only, and the 75% new one I implemented with two meta tags and a link, which, in an odd case, needs a third meta tag instead of (or beside) the link, instinct tells me that the first one, which caters for all cases, is the better design. The problematic case?
If page N is on domain A and page N+1 is on domain B, the same-domain barrier prevents me from scooping out whatever data the microformat producers may have leveraged page N+1 with. Yet, thanks to some amazing stunts I pull in my wget lib, I can still XPath-scan it; to find the next link there, though, I still need the XPath with which to do it, and that XPath must already be known on page N, so I can scoop up the next link myself rather than delegating to producers to do it for me.
The present implementation does not make either choice, so you could consider it implementing all of the minimal 3-meta standard, the also minimal yet only 95%(?) capable 2-meta+link[@rel~=next] standard, and the maximal, yet overlapping / bloated 3+1 standard. It admits producers of all three kinds (I think mine are all of the latter kind, though, due to an oversight in one of my mini libs, they were of the second kind for the first hour after publishing), but nothing prevents 3rd parties from doing the minimal work only.
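Concretely, the next-link lookup in that present implementation ends up as a fallback chain along these lines (a sketch, reusing the getMeta and xpath helpers from the post; not the actual code): use next-xpath when a producer supplied it, fall back on link[@rel~=next] otherwise. The meta is read off the page you are on, while the expression may get run against the page you just fetched, which is what rescues the cross-domain case above.

    // Next-link resolution across the overlapping variants: prefer an explicit
    // next-xpath meta (read off the current page), fall back on the standardized
    // <link rel="next">. "doc" is the document to scan -- possibly the freshly
    // fetched next page rather than the current one.
    function findNextUrl(doc) {
      var nextXPath = getMeta('next-xpath');
      if (nextXPath) {
        var hit = xpath(nextXPath, doc)[0];
        if (hit) return hit.href || hit.value || hit.textContent;
      }
      var links = doc.getElementsByTagName('link');
      for (var i = 0; i < links.length; i++)
        if (/\bnext\b/i.test(links[i].rel)) return links[i].href;
      return null;
    }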
This is the slippery slope standards walk on their way to bloat and over-engineering, which is mostly counter-productive and leads to the ugly cases where incomplete implementations (say, the 2+1 case, faced with the ugly cross-domain problem) lead to complex breakage. I've seen it happen before, hence my realization that I might actually prefer the 3-meta standard myself, despite not leveraging a good present standard practice (and the fact that I have no other consumers for the link[@rel~=next] standard at present subtracts from the potential value I might get from producing it, just in case, paying with slight producer bloat).
(That lesson about standards may be just as valuable as the other bits I describe in the post -- possibly even more so.)