2006-03-09

FireBug: a web dev productivity boost

This morning I started out looking for tips on how to put FireBug to best use, cutting needless pain and labour out of javascript web development. It's a well-crafted tool by Joe Hewitt (of Document Inspector fame) which I missed by mere moments last time I browsed Firefox extensions.

While not quite a drop-in replacement for any of them, it borrows enough ideas from the Aardvark (previously featured here), Console² (showcased here) and View Rendered Source Chart extensions, melding them into one very tidy and well-organized web development power tool, and picks up additional good inspiration from the MochiKit javascript interpreter. (I'd love to see something like Console Filter added to the mix too, but hey, it's still alpha.)

To the naked eye, FireBug is first and foremost as visually tidy and to the point as typical Mac applications, neatly concealing the power it packs rather than cluttering its interface with featuritis. You get one console view per window, listing just the errors, warnings and XMLHttpRequests related to it, initially hidden behind a checkmark in the status panel -- green prior to the first encountered error condition, or red, reporting the number of errors registered. Click it to see the errors, much as in the common Firefox error console (formerly named the "javascript console"), though perhaps somewhat tidier. Click an erring file name/line number, and the culprit opens up in a new window, focused on that line.

Curious about XMLHttpRequest traffic? Each request gets a line of its own, where you can quickly peek at sent data, received data, and even browse through the properties of the XMLHttpRequest object, a mere click away. Great for a no-effort in-depth view of AJAX applications' chatter with the host server.

And finally, there is the console's command-line-ish input field, where you can instantly apply javascript to the live page, much as with Jesse Ruderman's javascript shell.

This bit was what I had hoped to get some more input on, as I had observed this tiny peculiarity: while the current scope is the window object, there is some pollution in the unqualified namespace -- the $ function, for instance, is not the window.$ function (even when defined), but what appears to be some code internal to FireBug:
>>> $
function (id) {
    var doc = FireBug.currentLog.window.document;
    return doc.getElementById(id);
}
Unfortunately it doesn't work in this context, though, so it is probably an unintentional bug that it ever got exposed in the first place.

But imagine the splendour of having a drop-in MochiKit, or Dojo, or your own favourite unobtrusive javascript framework, instantly accessible for live debugging of any page, regardless of page prerequisites, right in this console. ("Unobtrusive", because something like Prototype, which affects the page universe, would most likely add more debugging confusion than convenience, seriously breaking WYSIWYG.) A possibility I'd love to see in coming versions.
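Lacking such native support, something similar can be approximated by hand from the console today; a minimal sketch, assuming you have a hosted copy of your library of choice (the loadLibrary name and URL below are placeholders, not anything FireBug provides):

```javascript
// Hypothetical helper: pull a javascript library into the page
// being debugged by injecting a script tag into its head.
function loadLibrary(url) {
    var script = document.createElement('script');
    script.type = 'text/javascript';
    script.src = url;
    document.getElementsByTagName('head')[0].appendChild(script);
}
// Point this at wherever your framework of choice is hosted:
loadLibrary('http://example.com/MochiKit/MochiKit.js');
```

Once the script has loaded, its functions are available for further console commands against the live page.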

2006-03-05

E4X and the DOM

I haven't covered E4X (short for ECMAScript for XML; ECMA-357 specification) much here yet, but I have been experimenting with it for a while now in Greasemonkey scripts, and got to a point where I feel I have some findings to contribute.

I'm not going to go into details about the splendour of the E4X design nor explain basic concepts; Jon Udell gave a short introduction in September 2004, and I'll be referring to a few other good articles about it later in this post too. In short, though, it makes XML nodes and trees first-class objects (just like numbers, strings or RegExps), uses XML itself as the literal syntax, and adds terse, readable and expressive syntax, sharing many common traits with XPath, for performing various slicing and dicing operations on these objects.
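To give a small taste of what that looks like, here is a sketch with made-up data (it runs only in E4X-capable engines, such as SpiderMonkey in Firefox 1.5):

```javascript
// An XML literal: the tree is a first-class value, no parser needed.
var people = <people>
    <person><name>Ann</name><age>29</age></person>
    <person><name>Bob</name><age>34</age></person>
</people>;

// XPath-flavoured navigation and predicate filtering:
var adults = people.person.(age > 30).name; // the <name>Bob</name> node
var first = people.person[0].name.toString(); // "Ann"
```

The dotted paths and the `.( predicate )` filter are what give E4X its XPath feel, without leaving the language.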

On a ranty side note, it's what the DOM APIs should have been in the first place, had they not been plagued by Javaisms such as naming the most basic and most frequently used method document.getElementById, a whopping 23-letter name. People who write I18N and L10N for Internationalization and Localization should not use the DOM. (Or should set themselves up with d21d(), d27e(), and so on, as aliases.) John Schneider makes a less emotional comparison between XSLT, DOM and E4X in Native XML Scripting with E4X, from the proceedings of the XML 2005 conference, and sums up his conclusions in corporate speak at the end too. Again, in short: E4X is about productivity and readability.

As those of you who have been following the E4X field might know, there has been some support for it in Firefox for quite a while now. Kurt Cagle described some basics in June 2005, and returned to the topic in a later presentation, Advanced Javascript (subtitled "E4X in Firefox 1.5"), from which I'd like to quote the killer misfeature of the current state of affairs:

Object created is NOT a DOM Node, but an E4X node.

Which means that while E4X nodes are first-class objects, you cannot pass them to the DOM APIs; no node.appendChild( <img src={url}/> ) yet. (Had it worked, that code would have been a drop-in replacement for var img = document.createElement('img'); img.src = url; node.appendChild( img ); -- expressive indeed!) So while you can do lots of really nifty XML operations without resorting to messing with XPath through clunky DOM APIs, before you inject the results anywhere you are falling back to the old and ugly node.innerHTML = e4x.toXMLString(), injecting by string representation. Eww.

Or maybe not.

I sent out a plea for help to the Greasemonkey list, and some time later encountered a resourceful post by Mor Roses, where he tossed up an importNode method that translates E4X nodes to DOM nodes for a specific document object. Here is my take on it:
function importNode( e4x, doc )
{
    var me = importNode, xhtml, domTree, importMe;
    me.Const = me.Const || { mimeType: 'text/xml' };
    me.Static = me.Static || {};
    me.Static.parser = me.Static.parser || new DOMParser;
    xhtml = <testing xmlns="http://www.w3.org/1999/xhtml" />;
    xhtml.test = e4x;
    domTree = me.Static.parser.parseFromString( xhtml.toXMLString(),
                                                me.Const.mimeType );
    importMe = domTree.documentElement.firstChild;
    while( importMe && importMe.nodeType != 1 )
        importMe = importMe.nextSibling;
    if( !doc ) doc = document;
    return importMe ? doc.importNode( importMe, true ) : null;
}
To make it more pragmatically useful, I tossed up two helper methods, appendTo and setContent, both of which take an E4X structure and a target node as parameters and inject your XML at the end of the node. The latter method, in addition, starts by removing any prior contents of the node:
function appendTo( e4x, node, doc )
{
    return node.appendChild( importNode( e4x, doc || node.ownerDocument ) );
}

function setContent( e4x, node )
{
    while( node.firstChild )
        node.removeChild( node.firstChild );
    appendTo( e4x, node );
}
So it's not node.appendChild( <img src={url}/> ), but appendTo( <img src={url}/>, node ). (Prototype fans may of course opt to add these methods to Node.prototype instead, laughing in the face of potential naming collisions with external libraries -- that aspect being an inherent feature, or plague, of the language design.)
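Put together, a sketch of how these helpers read in practice (the url value and container id are made up for illustration; E4X-capable Firefox only):

```javascript
// Build an E4X fragment with an interpolated attribute, then swap
// it in as the new contents of some page element:
var url = 'http://example.com/photo.png';
var container = document.getElementById('container');
setContent( <div class="photo">
    <img src={url} alt="a photo"/>
</div>, container );
```

The XML literal replaces a half-dozen createElement/setAttribute/appendChild calls, which is the whole point of the exercise.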

For a real-world code example, I'm making extensive use of this in my recent Mark my links tool (version 1.7 source code).

Greasemonkey script writers out there might want to know that it is not a perfect translation, though useful for most purposes -- the tagName property of the resulting nodes is not upper case, the way it for some reason is in HTML documents, so if you want to fly under the radar of target page code looking for, say, IMG elements, you might need to perform some additional trickery. I just kludged one such case using unsafeWindow.Image.prototype.__defineGetter__( 'tagName', function(){ return 'IMG' } ) -- I'm sure there are nicer ways too.

For some reason, I don't see any reports of parse errors in scripts where the E4X literals contain malformed XML, though, but rather get plain non-functional scripts, which seriously hurts debugging. I have yet to find out whether it's due to some flaw in the Mozilla core, in Greasemonkey or in my local Firefox installation. Somehow I suspect the last the most; let's hope I'm right about that.

2006-03-04

Recent peek-arounds

  • Jeff Schiller has produced a second iteration of his pretty SVG web stats (see my prior article about his first go), which now sports live draggable time ranges for instantly showing visitor browser distributions during the period of choice. Statistics visualizers all around: borrow inspiration from the master.

  • Stefan Gössner (whom I have somehow not noticed before) keeps an interesting blog with a scope touching various ecmascript tech about as much as I do here, occasionally posting gems such as Slideous, another HTML slide show inspired by Eric Meyer's S5 and Dave Raggett's HTML Slidy. It seems all really good web developers who occasionally double as speakers design their own slideshow tools; here's Aaron Boodman's take. (Maybe I should opt not to give talks? ;-)

  • I published another sneak "unstable" 1.7 release of Mark my links earlier today, addressing the issue of (un)foldable sections in pages, which left favicons hanging somewhere in mid-page in prior versions of the script. I was hoping to attach a hook on the DOMSubtreeModified event, but it appears Firefox does not yet support that, so I took to listening in on click events instead, hoping to catch the user interactions that make pages change layout.

    It at least works nicely enough for GMail and the recent conversations view at coComment, if a bit slow. If anyone has a pointer to the method of querying for browser DOM capabilities, give a shout, and I'll update the script accordingly to silently start doing it right in future Firefoxen. The reason I'm not calling 1.7 a stable release, by the way, is that it seems to have more trouble picking up favicons than 1.6 did. Maybe it's just net fluctuations from my ISP, but as I moved around lots of code and tidied it up measurably, I'll put a new stable release on hold for a bit. You can always downgrade if you see similar results.
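    One candidate for such capability querying, for what it's worth, is the DOM Level 2 hasFeature method, though browsers are known to answer it optimistically, so treat the result as a hint; a sketch (node and onChange stand in for whatever element and handler the script actually uses):

    ```javascript
    // Ask the browser whether it claims DOM Level 2 mutation event
    // support before relying on DOMSubtreeModified; fall back to
    // the click-listening kludge otherwise.
    if ( document.implementation.hasFeature( 'MutationEvents', '2.0' ) )
        node.addEventListener( 'DOMSubtreeModified', onChange, false );
    else
        node.addEventListener( 'click', onChange, false );
    ```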
You may incidentally listen in on what I chat with people about on the web over at coComment. I have still not quite decided on whether (and how) to integrate that here, but for now it can wait until they have native JSONP support and stop filtering out basic HTML. Until then, I'll stick to double posting CommentBlogging lead-ins to Del.icio.us too, which recently did get native JSONP support. Besides, I'm still behind on setting up a sidebar pane for related sites such as those mentioned above.