2009-06-19

Updating Firefox extensions from 3.0 to 3.5

When I started writing this post, two weeks ago, 46% of the top 95% most used Firefox extensions on AMO (according to the Firefox Add-on Compatibility Report) still did not declare 3.5 support. The Mozilla team has been checking in with at least the topmost few of those projects (being one of the Greasemonkey maintainers, I got one of those pings a few weeks ago), asking whether we were having any issues bumping our extension's max-version to 3.5. We had not tried at that point, but (happily) found no big issues, so we just let the release already out claim 3.5.* support.

For other extensions, like MashLogic, which I hack on at work, the transition was not as trivial. With an integration surface toward Mozilla larger than Greasemonkey's, more of the things changing underfoot got a shot at breaking something -- and some did.

The information about backwards-incompatible changes available at MDC was, and still is, a bit sketchy, and did not cover the issues we were hitting at the time -- it seemed XUL documents were getting XPCNativeWrapper treatment in 3.5 that they did not have in 3.0, so we had to start adopting Greasemonkey-like roundabout practices to read and write properties of the XUL DOM. That is the kind of information it is good to be able to read about -- and there, specifically.
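
To illustrate the sort of roundabout access I mean -- a minimal sketch, assuming the usual wrappedJSObject escape hatch, with the element id made up for the example:

    // Under an XPCNativeWrapper, expando properties set by other code are
    // not visible directly; you reach through wrappedJSObject instead.
    var box = document.getElementById("config-box"); // hypothetical id
    var raw = box.wrappedJSObject || box;            // unwrap if wrapped
    raw.customState = "loaded";  // write through the unwrapped object...
    var state = raw.customState; // ...and read it back the same way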

It's a wiki, though, so if you find something that isn't already there while you are scratching your head about why things are not fine and dandy in 3.5 (due rather soon), please add your findings too. I did a brief writeup of mine.

For MashLogic, we actually ended up removing the XUL page we had used for configuration, doing the same thing in HTML instead -- work we had already mostly done for our IE port anyway. A few loose ends remain here and there, though, before we can claim fully working functionality under 3.5.

On top of that, there is the not-so-enjoyable multi-week delay at AMO, getting our latest release through their review system. Our current release is thus so far only available if you look at their "see all versions" page and grab the so-called "experimental" build at the top. User reviews seem to help get the ball rolling a little, though, so if you try it out, we would be really glad if you shared your opinion with other users there by writing a review.

Whether you are a user or an extension developer, sharing your experience like this really helps others -- especially if you do it where others go looking for it. MDC and AMO are rather good funnels for developer- and user-centric Firefox extension matters, and the more we use both, the better they get. Thanks for your help!

2009-06-06

Greasemonkey and bottom-up user scripting

I reread Paul Graham's essay Programming Bottom-Up yesternight, and mused for a while about a set of ideas Olivier Cornu launched on Greasemonkey-dev around Christmas last year, on offering user scripts functionality to cooperate with one another. It is, I realize now, the same territory.

That initial message quickly grew to astronomical proportions as we developed the territory (he and I sharing similar experiences of wanting user scripts to be able to share functionality with each other), trying to come up with how, while also selling the idea to the remaining Greasemonkey maintainers, Aaron and Anthony, neither of whom wrote huge site-specific user script applications. I will not attempt to sum up much of the proposals or their implementation, but Olivier's fork of the project has the latter, in whatever state it had reached at the time, plus a healthy bit of refactoring and polishing of corners here and there, in case someone is curious.

What I will do in this post, however, is transplant Graham's post to the domain of web development in general and user scripting in particular -- moving it into the language domain of Javascript, the implementation domain of web browsers, and, lastly, user script programming under Greasemonkey. A few steps away from Lisp and web server applications, but all the same principles still apply. (The remainder of this post will not make as much sense if you haven't read Graham's Programming Bottom-Up.)

In client side web development, we face diverse host environments where most specified APIs work (and break) subtly differently, depending on the user's browser. The widely adopted solution to that has been the breadth of ajax libraries such as jQuery, YUI, Dojo, Prototype, Base2 and others. To the client side web world, these are the first step towards changing the language to suit the problem: bridging the (largely unwebby) DOM API into something more tractable.
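
To make the bridging concrete -- a purely illustrative sketch, not taken from any of the libraries above, contrasting the raw cross-browser event plumbing with jQuery's take on the same thing:

    function onSave() { /* whatever saving entails */ }

    // Plain DOM, with the old cross-browser event plumbing by hand:
    var button = document.getElementById("save");
    if (button.addEventListener)
      button.addEventListener("click", onSave, false); // W3C browsers
    else if (button.attachEvent)
      button.attachEvent("onclick", onSave);           // legacy IE

    // The same intent, through jQuery's bridged API:
    $("#save").click(onSave);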

Thus your javascript programs become about implementing the sought functionality, more than about jumping through the hoops of the browser environment.

User scripting, by comparison, is a much smaller domain, both in the body of programmers who do much of their work there and in ajax library coverage. Greasemonkey, probably still its biggest subclass, adds to the hoops of the web all the hoops of the Mozilla javascript sandbox, an even more hostile environment than the web: there are XPCNativeWrappers around your DOM nodes and lots of other gotchas to watch out for.
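
One classic such gotcha, as a sketch (the node picked here is arbitrary, but the behaviour is real): assigning an on* handler property on a wrapped node silently does nothing, whereas addEventListener works as expected:

    var link = document.getElementsByTagName("a")[0];
    link.onclick = function () { alert("never fires"); }; // lost on the wrapper
    link.addEventListener("click", function () {
      alert("this one fires");
    }, false);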

And due to that added baggage, you will likely find that most or all of the ajax libraries subtly break here and there if invoked under Greasemonkey as you would any other code, and/or need slight changes to work as intended. So writing user scripts for this environment leaves you even more in need of libraries if you aim to do anything beyond very tiny hacks -- otherwise you waste dev time jumping through hoops.

Greasemonkey got @require support last year, giving some syntactic sugar for inlining code from a remote URL at installation time, letting us at least hide libraries away from the main body of code, which was a slight blessing. But that only does install-time composition.
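
For reference, this is what that sugar looks like in a script header (the library URL here is made up):

    // ==UserScript==
    // @name       Example script using a shared library
    // @namespace  http://example.org/userscripts
    // @include    http://example.com/*
    // @require    http://example.org/lib/site-helpers.js
    // ==/UserScript==

    // site-helpers.js is fetched once, at install time, and inlined ahead
    // of this code -- so its helpers are simply in scope from here on.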

This is where Olivier's work comes in, letting scripts export API methods usable from other user scripts -- effectively acting as middleware between a web page or site and other user scripts. This is yet another step towards changing the language to suit the problem, or, more precisely still, changing the provisions of the host environment to suit the problem.
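
To give a feel for the shape of the idea -- and this is purely illustrative, not the actual interface of Olivier's fork; the GM_exportAPI and GM_importAPI names are invented for this sketch:

    // A middleware script wraps the site, running on whatever pages it
    // needs to, and exports an abstract API:
    GM_exportAPI("webmail", {
      unreadCount: function () {
        return document.getElementsByClassName("unread").length;
      }
    });

    // A third party script consumes that API, never touching the site's
    // markup itself:
    var webmail = GM_importAPI("webmail");
    GM_log("unread messages: " + webmail.unreadCount());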

Because not only do Greasemonkey user scripts suffer the penalties of normal client side development and the traction penalties of the Mozilla javascript sandbox; they also face the constraints of working with whole web sites (where data or functionality needed by a user script could be spread across different urls) through the eyes of one single web page at a time. When you develop a browser helper user script for GMail, a dating site, some web forum, game or similar, some of the functionality you want to add on one page might require keeping track of data only found on other pages.

That quickly becomes a great endeavour to keep track of in a user script: you will spend more time jumping through hoops, and more code on building and maintaining site interfacing code, than on the function of the script itself. The host environment starts invading your core logic again, leading you off what your script is really about.

Attempts at bridging that with @require force you (read: your script's user base) to reinstall every script using that library, every time the site changes in a way that requires a fix in the library. A slight improvement, but we could do better.

What Olivier and I wanted (and evolved through much discussion, together with all the other smart voices on the dev list) was a system that would let a user script abstract away that functionality for other scripts -- maintaining rules for which pages to run on, keeping local storage for whatever state info it sees fit, and so on -- exporting some abstract API to third party scripts that in turn want to focus solely on site related functionality, and not on site plumbing.

It proved staggeringly difficult to reach consensus about the soundness of these ideas on-list, though two or three of the four or five of us had a very good grasp of what value they would bring to user script development. I think we may soon see a somewhat different take on how the Greasemonkey project evolves and is maintained. I hope we will keep about the same track record of stability and security in what we release to you at addons.mozilla.org.

I also hope we might make it easier to bring about new tooling like this in the future.

2009-06-01

XPath bookmarks

A few years ago, annoyed with web pages that don't offer in-page permalinks where they ought to, I wrote a tiny XPath bookmark user script to solve half of the problem: when visiting any URL whose fragment is formatted as #xpath:some-xpath-expression, it scrolls to the location in the document to which that expression points.
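
The resolver half is small enough to sketch here (details assumed from memory rather than quoted from the script):

    // Pick the #xpath: fragment apart, evaluate it, scroll to the hit:
    var match = location.hash.match(/^#xpath:(.+)/);
    if (match) {
      var node = document.evaluate(
          decodeURIComponent(match[1]), document, null,
          XPathResult.FIRST_ORDERED_NODE_TYPE, null).singleNodeValue;
      if (node) node.scrollIntoView();
    }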

I didn't end up using those URLs much in practice, largely because it was a bit of a hassle crafting the XPath expressions, and to some extent because they wouldn't do much for people who did not also run that user script.

At some later point, I seem to have realized as much, and that my custom keyboard bindings script -- which I mainly use to scroll search results and other (typically paginated) item-oriented pages one item at a time (via m/p -- don't ask :-), and which already knows XPath expressions for the stuff it navigates -- could easily offer to create those XPath bookmarks for me. [1]

So I apparently implemented that feature, called it "?", and promptly forgot all about it. Needless to say, I still didn't use the bookmarks any more than before.

Today, I realized that my (forever growing) fold comments script knows a thing or two about XPath expressions for stuff in a page (visitor comments, typically) and ought to expose that to this magic bookmark creation script -- especially since it is typically some specific comment I wanted to bookmark, anyway. At the same time, it dawned on me that if I just let it share this XPath expression referencing comments (the ones it would fold) through the items-xpath meta tag of the pagination microformat that the above keyboard bindings script groks, and that script, in turn, always updated the URL when scrolling to a new place in the document, my primary use case would be covered, at least for most of the sites I frequent.
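
The sharing step itself is tiny; a sketch of what a producer might do (the meta tag name is per the microformat mentioned above, the comment XPath a made-up example):

    // Publish the comment XPath for other scripts to pick up:
    var meta = document.createElement("meta");
    meta.name = "items-xpath";
    meta.content = "//div[@class='comment']"; // hypothetical per-site XPath
    document.getElementsByTagName("head")[0].appendChild(meta);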

I was quite bemused to find that I had implemented the "?" thing, and in short order updated it to do that automatically. So, at long last, setting XPath bookmarks to in-page items of opaque, permalink-deprived web pages is a breeze. I wonder how much they will end up being used. As I never even blogged about the XPath bookmark resolver script in the first place, it hasn't picked up any mentionable user base (245 installs at the time of writing). I think it is a feature that would do the web a whole lot of good, if something like it gained traction and widespread adoption.

Not all XPath expressions make good bookmarks, of course, on web pages that change over time, but very many do, and for static content especially, they are pretty much guaranteed to hold up -- even in the face of stuff like CSS layout changes -- just as permalinks do.


[1] Zipping through template-formatted items one item at a time with a keyboard command, so your eyes do not have to shift focal point and can focus on just the changing content (minus the static template administrivia, which suddenly, cognitively, disappears from view entirely), is a positively huge UI improvement over either of the normal practices: reading a little, scrolling a little, and reading a little more, or skimming a page, hitting page down, reorienting yourself, and repeating the procedure.

It gets even worse with paginated web pages like search results, where, in addition to all that, you have to click "Next" and wait for a moment somewhere in the workflow. Fixing this by pushing "endless pages" (which automatically load another pageful of content once you scroll to the end of the page) onto all your visitors is the wrong way; many will hate you for it. As for my own browsing, I sooner or later tend to end up adding a pagination microformat producer script (if a site already has those meta tags present, with correct XPath expressions in them, I of course would not need to do so myself) for my unpaginate pagination microformatted web pages script.

Those scripts work well in unison, so I can key down one item at a time for pages upon pages, at worst having to wait a moment here or there while the next page loads.