Death and IE

This post is a brief celebration of succession on the web. Not Mosaic begat Netscape begat Mozilla begat Firefox succession, but Firefox 4 begat Firefox 5 begat Firefox 6 begat Firefox 7, and Chrome N begat Chrome N+1, type succession. Google got it right with Chrome last decade, and not long thereafter, Mozilla got it too this year.

Quoting that post's quote of Steve Jobs' 2005 Stanford commencement address (worth watching, if you haven't):
Death is very likely the single best invention of Life. It is Life’s change agent. It clears out the old to make way for the new. Right now the new is you, but someday not too long from now, you will gradually become the old and be cleared away.

This applies very strongly to web browsers, and as it turns out, the best thing a browser version can do (besides getting things right in the first place) is to die even more quickly than it arrived. I wonder if this enlightened insight might have sprung from Mozilla sooner, if only Firefox could have kept its inaugural name, Phoenix, which it now fully embodies; but at least now, this gift has finally been given to web developers:
You need not bother writing applications (or perfecting layouts) to high fidelity for old browsers, for their time is short and their better descendants replace them quickly.

That is, assuming your deployment domain is web pages. As browser add-ons go, if you still maintain one of the old-style XPI variety, your work burden has shot through the roof instead, unless you have a near-zero UI footprint, happen to need only APIs that time proves to remain stable, and host it on addons.mozilla.org, where its maxVersion gets bumped every few weeks for you as long as it still seems to work. Failing that, you have to manually update and test it seven times per year. Chrome has premeditatedly addressed this by not promiscuously offering any APIs it can't support in the long run, at the cost of limiting how much add-ons can do. Firefox remains the only browser where add-on authors can innovate on 100% of the browser, and in this regard it has filled an important need in the browser ecosystem (and still does). It is both its greatest feature and its greatest burden. Hopefully this will be mitigated to some extent by Jetpack and the new add-on builder.

Returning to the paramount topic of graceful, benevolent, rapid-evolution-supportive death: Microsoft's IE does not yet get it. Like a third-world nation befallen by sudden prosperity, it has doubled its reproduction rate while keeping its mortality rate constant. As noted in the posts mentioned above, this does not just bring better browsers faster; it brings over-population, and makes web development unsustainable. Or, as Paul Irish put it in the first post: it pollutes the browser market.

In this respect, the conditional comments and the many "backwards compatibility simulation" modes IE brings you are not mainly tools helping web developers make sites work in Internet Explorer, but a huge extra burden of work forced onto web developers to cope with IE's broken release process. This is Microsoft's job, not ours, and we should be outraged with them for forcing it on us.

It is not just the rotting corpses of IE6, IE7, IE8 and IE9 that all need to die to give space to IE10, which still smells fresh; it is a broken release process that needs to be addressed and brought up to date with the best browser practices of the decade. It is the reinvention of sudden death, not just the gift of new life, that must come to Redmond, too. I applaud the IE team for making it their business to get up to speed with the web's evolution, but it's less the whats than the hows that are important to get right now. SVG is great, but sort out the release process before taking on, say, WebGL. It can wait. Fixing the world's hard problems, like over-population, is harder than running after the latest ooo-shiny! – but the alternative is systemic collapse.

For a browser, it is better to live a great but short life and go out with a boom than it is to burden its extended family with a never-ending old age in an insufferable early-set rigor mortis. However you feel about Steve Jobs, he lived and died in this way, never holding back, never growing stale of mind nor action, and the world was better off for it.


Running an old rails 2.3.8 with rvm

I was helping set up a local (legacy) rails 2.3.8 server on a MacBook today, autonomous from the system ruby installation. This was a bit messy, as modern versions of rubygems conflict with the old rails 2.3.8 environment, to the tune of:
uninitialized constant ActiveSupport::Dependencies::Mutex (NameError)
...when you try to start the server. Here's the recipe I came up with:
  1. Install rvm.
  2. # Install ruby 1.8.7 and downgrade its rubygems to 1.4.2:
    rvm install 1.8.7 && \
      rvm use 1.8.7 && \
      rvm gem install -v 1.4.2 rubygems-update && \
      rvm gem update --system 1.4.2 && \
      update_rubygems && \
      echo 'Okay.'
  3. # Install all the gems you need, for instance:
    rvm gem install -v 2.3.8 rails && \
      rvm gem install -v 3.0 haml && \
      rvm gem install -v 2.1 authlogic && \
      rvm gem install -v 1.0 dalli && \
      rvm gem install -v 0.2 omniauth && \
      rvm gem install -v 2.7.0 erubis && \
      rvm gem install -v 1.3.3 sqlite3 && \
      echo 'Gems installed!'
  4. If needed, run rake db:setup in your rails tree to set up its databases.
  5. Done! rails/script/server -p your_port is ready for action.


Optimized SVGs at gist.github.com

Lately, I've been having a lot of fun hand optimizing SVG files for size, a bit like Sam Ruby does for his blog (it is highly instructive peeking at his collection, as I think I have mentioned before).

For me, SVG has something of the magical flair I first found in HTML in the nineties, back when it was the new bleeding edge New Thing, but I argue that it's even more fun than HTML was. The W3C SVG specs are not prohibitively difficult to read, and of course you have much greater graphical freedom than structured documents can afford you (duh!).

Like Sam, I try for something presentable in a kilobyte or less (uncompressed, though modern browsers are just as happy to render SVGs delivered with content-encoding: gzip, of course, as long as they are otherwise correct and delivered with an image/svg+xml content-type), and never to enforce a fixed width in the svg tag itself – so they just beautifully grow to any size you want them to be, with no loss of quality.
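A minimal sketch of what that looks like (illustrative markup, not one of my actual files): a viewBox but no width or height, so the image scales losslessly to whatever size its container gives it:

```xml
<!-- viewBox defines the drawing's coordinate system; with no
     width/height attributes, the image renders at any size -->
<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 100 100">
  <circle cx="50" cy="50" r="40" fill="#3a6ea5"/>
</svg>
```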

That is the other main beauty of SVG, besides it being fly-weight, standardized, widely supported, and enjoying ever-broader support in all the mainstream browsers. Over the last few years the SVG front has been progressing happily, and it is already very practically useful, at least for those of us who care most about Chrome, Firefox and Opera (I get the impression that Opera's often rather exceptional lead on the SVG support front is largely or even solely the work of Erik Dahlström, but I might exaggerate a bit).

[I thought what I'd do was, I'd pretend I was one of those deaf-mutes] (<- this is text content in an inline SVG, if your browser or reader has stripped off the SVG or can't render such modernisms :-) Anyway, this weekend, I had fun turning the Laughing Man (from Ghost in the Shell: Stand Alone Complex) that @elmex vectorized at some point, into a 1000 byte version of my own, also featuring the gentle rotating text seen in the anime series (YouTube), via declarative animation (so there is no javascript involved here).

Edit: I initially missed an excellent opportunity here to plug Jeff Schiller's scour, which is an ideal first step when you start from an SVG source file. Be sure to run with -p something-large, as its default is lossy about precision, which cuts needed decimals from some input files. With -p 99 you'll be on the safe side. Experiment with low single-digit numbers if you like (the current default – 5 – is often good), but make sure things still look right, or you may just ruin your file rather than optimizing its footprint. Broken images don't get extra credit for also being small!

The result is in this gist (if you install this user script, you can swap between the *.svg text/plain source code and its image/svg+xml rendition) or to the left, if your browser renders inline SVGs properly in HTML content (I recommend the gist over view source, as Blogger inserts linebreak HTML tags unless I strip out all newlines first).

What surprised me, when I made this user script, is how far standards support has come in modern browsers: in order to inject the <svg> element I create for the rendered version, I had to feed the source through DOMParser (as setting .innerHTML is lossy, treating the SVG content as text/html rather than text/xml), and the few lines doing that just magically worked in all of Chrome, Firefox 4 and Opera 11 (de-jQuery:fied here, to make more sense outside of the script's context) with no special extra effort on my part:
// turn the raw SVG source string into an XML document:
svg = (new DOMParser).parseFromString(svg, 'text/xml');

// import it into an SVGSVGElement in this document:
svg = document.importNode(svg.documentElement, true);

// and insert that element somewhere in the document:
document.body.appendChild(svg); // or wherever it should go
To me, that's a rather clear sign SVG is ready for prime time now.

While github reports having no plans to serve *.svg gists with an image content-type (they don't want people using gists for image hosting, I guess, even though it's sad you can't easily preview one without saving it to disk or using my hack above), I still think the light-weight, community-oriented sharing of gists is good for this kind of thing. Others have happily forked the octocat SVG I similarly format-converted a while ago from the github About page, and milligramme made this much spacier version.

I gather all my SVG play in a svg-cleanups repository on github, if anyone wants to get inspired the fork-or-follow way, and occasionally tweet about it. If you find this kind of exercise as much fun as I do, I'd love to hear about it: here, on Twitter, github, or elsewhere. I believe it's good teaching and learning for the web as a whole, too. Any logos, trademarks and the like above are property of their respective owners.


Draw your own Github SVGs, step by step

I SVG:ified and played a little further with the logo material from the recently published Github About page, and then tonight I figured it would be fun to visualize the elegant process by which a raw SVG image is built up, piece by piece, from rather basic building blocks. With just a little bit of javascript magic to help you, here is how you piece together your own github schwag from scratch (works like a charm in Chrome, Firefox 4, and presumably any other modern browser that can handle inline SVG images):

If all you see is a button and the description of each step, that just means your browser doesn't natively handle inline SVG, which of course is a bit of a shame. Another reason I was curious to try this was to see how inline SVGs fare in feed readers. (Google Reader seems to fail.)

Nicer source code for the images than in the page (Blogger insisted on filling all the whitespace with HTML junk) is in these gists: github-logo.svg, octocat.svg. gist.github.com currently doesn't serve .svg files as image/svg+xml, so you'll have to save them locally first to see them rendered.

Source code for the step-by-step drawing is in this gist; MIT licensed, if you want to fork away, adopt, adapt or whatnot. Have fun! I did. :-)


Add-on hosting: Mozilla vs Google vs Opera

I habitually develop browser user scripts to streamline things for myself (and others) on the web – and a half-random scattering of them tends to end up as proper add-ons, when I think the benefit of their being easy to find, install and reuse by others merits the added work for myself of packaging them up and submitting them to an add-ons gallery.

This happens rarely enough for me to forget some details of the process (yet often enough to be annoying), hence this post to document the salient parts of it, applaud the parts the hosts have worked out really well, and note where there are holes to patch up. (I don't cover Safari, since I have not made any Safari add-ons.)

Firefox add-ons: addons.mozilla.org, a k a AMO

Your add-ons are listed here: addons.mozilla.org/en-US/developers/addons
Add-on URL: addons.mozilla.org/en-US/firefox/addon/your-configurable-addon-slug
Public data: current version, last update time, compatible browser versions, website, and optionally all add-on detail metrics, if the developer wants to share them (excellent!)
Public metrics: total download count, average rating, number of ratings
Detail metrics: TONS: mainly installation rate and active installed user base over time, broken down by all sorts of interesting properties or in aggregate, graphed and downloadable in csv format. Notable omission: add-on page traffic stats. Public example: Greasemonkey stats
Developer page linked from the public add-on page when you're logged in: NO
Release process: Manual review process that can take a really long time, as AMO is often under-staffed and hasn't successfully incentivized developer participation in the process enough to change that.

Summary: great stats and an ambition to make information public and transparent.

Chrome add-ons: chrome.google.com/webstore, or the Chrome Web Store
Previously lived at chrome.google.com/extensions, the Chrome extensions gallery

Your add-ons are listed here: chrome.google.com/webstore/developer/dashboard
Add-on URL: chrome.google.com/webstore/detail/your-addon-signature[/optional-add-on-slug]
Public data: current version, last update time, if installed: a checkmark and the sign "Installed", instead of the "Install" button (excellent!)
Public metrics: total download count, average rating, number of ratings
Detail metrics: If you jump through the relatively tiny hoop of creating and tying a new Google Analytics account to your add-on, you get detailed add-on page traffic stats over in Google Analytics. Notable omission: all metrics about your installed user base :-/
Developer page linked from the public add-on page when you're logged in: NO
Release process: Automated review process, making yourself the only release blocker, in practice. This is bloody awesome!

In summary: A really delightful release process that doesn't waste your time. The metrics part is really disappointing, missing the most interesting data; I hope that will improve with time. There are other essential missing docs too: to find out that you can self-host your add-on AND publish it in the Chrome Web Store, with the same add-on id, you need excellent Google-fu, or this secret knowledge:

  • first (before your initial web store submission) build your add-on locally
  • store the .pem file in some good location
  • rename it key.pem
  • copy that .pem file into the add-on directory
  • zip up that directory (so the top level of the zip file only has one directory)
  • NOW (and for all later versions, IIRC) upload this zip file to the Chrome Web Store
  • …and it will use your add-on signature, instead of one derived from Google's secret .pem

Opera add-on page

(slightly de-horizontalized screenshot taken here)

Opera add-ons: addons.opera.com/addons/extensions

Your add-ons are listed here: addons.opera.com/developer (this page is near impossible to find by navigation, and was the reason I created this blog post :-)
Add-on public URL: addons.opera.com/addons/extensions/details/addon-slug[/version (redirects visitors here)]
Add-on developer URL: addons.opera.com/developer/extensions/details/addon-slug/version (no redirect help, but links to all other versions)
Public data: current version, last update time, add-on size (excellent!)
Public metrics: total download count, average rating, number of ratings
Detail metrics: NONE. Notable omission: all data about your installed user base, AND add-on page traffic :-(
Developer page linked from the public add-on page when you're logged in: NO
Release process: Manual review process, rewarding the behaviour of publishing as little data about your add-on as possible, since each bit of data is a plausible blocker. (Example: it is a bad thing to link to your development site on github, unless it is a repository that only has Opera-addon-centric stuff in it, and does not, say, also cover the source code used to build the add-ons for all the other browsers it caters to.)

Summary: young; still a far cry from easy to find your way around. I accidentally managed to log in via my email address (after having reset the password), which had been given the same password as the account nickname I had published from before, but was otherwise treated as a separate account, with its own (empty) list of add-ons. So I at first found no way to upgrade my existing add-on and just uploaded a new one. When I found the mistake, there was no way to abort the review process I had started (but I could edit it to add some reviewer notes for the probably equally confused reviewer). What probably looks like a really well-integrated set of connected Opera sites to an Opera employee is, to an add-on developer, a huge maze of web pages, none related to publishing your add-on.

All of these hosting sites have developer log-in of some sort, yet fail to link the developer's admin view from the public add-on page. For Opera, this hurts really badly, as there is so much Opera noise around that does not try to help you publish your add-on; for the others, it's a smaller nuisance (it's also entirely possible that I learned where to find the "Developer Hub" and "Developer Dashboard" links long enough ago to now find them near-intuitive).

Update: I just found Opera's dashboard link! Your logged-in name is the link that takes you to your dashboard of add-ons. And it only took knowing the wanted url, opening a web console and typing [].slice.call(document.links).filter(function(a){ return a.href == "https://addons.opera.com/developer/"; }) in it. What do you know? :-)


Github Improved 1.7

After a whole year in development as a user script (initial mention; formerly known as "Github: unfold commit history") and a brief test-drive as an Opera extension at the 2010 add-on con, I finished up the missing bits and pieces to make today's Github Improved! chrome extension available in the Chrome Gallery / Web Store (1.7 is now also available in the Opera extension gallery).

New since the last release are the little tag delta (Δ) links that let you instantly see what happened between, say, Greasemonkey 0.9.5 and the previous release tag (it recognizes any pair of tags where the only thing differing is their numeric parts, which also means it's not going to handle fancy 1.6rc1-type tag names littered with semantic non-numeric parts):
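As a sketch of that matching rule (my own reconstruction for illustration, not the extension's actual code): two tags qualify for a delta link when masking every digit run leaves identical strings:

```ruby
# Tags are delta-comparable when only their numeric parts differ,
# i.e. replacing each run of digits with a placeholder makes them equal.
def delta_comparable?(a, b)
  a.gsub(/\d+/, '#') == b.gsub(/\d+/, '#')
end
```

So "0.9.5" pairs up with "0.9.4", while "1.6rc1" finds no partner in a plain "1.6".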

And, as prior users may find even more important, it no longer hogs any shifted keyboard shortcuts, which for some reason had the side effect of making arrow up pull in a pageful of diffs.

Full documentation of all features, and a few screenshots, is available on the Chrome Web Store page. I also take every opportunity to mention that it really shines best together with the AutoPatchWork extension, or something equivalent which unfolds the commit history pagination as you scroll off page. Enjoy!


Google Closure Library bookmarklet

Any self-respecting javascript library should have a debug bookmarklet that lets you load it into a page, so you can tinker with it (the library, or the page, for that matter) without any messy overhead like downloading the most recent version, building it with the specific flags and sub-components you want, saving the current page, adding library setup code to it, and reloading that page before you can do squat. I found Google Closure Library a bit lacking in that regard, so here's my take on it:

It has two modes of operation. If all you want to do is load the goog object into the current page (or overwrite the one you already have with one that has a different set of goog.require()s), just customize which those requires should be, and you're set; it creates an iframe, in which it loads the library (goog.require only works during page parsing time, as it uses document.write() to load its dependencies), and then overwrites the top window's goog identifier.

The second mode is good for writing your own bookmarklets making use of some Closure Library tools; provide your own function, and it will instead get called with the freshly loaded goog object, once it's ready. At the moment, I have only played with this in Google Chrome, but feel free to fork away on github, if you tinker in additional improvements for other browsers.

Finally, here's a bookmarklet version to save or play with, which will prompt you for just which closure dependencies you want to load: closure – drag it to your bookmarks toolbar or equivalent if you want to keep it around. Enjoy!

A "No News is Good News" Tsunami Feed

I have subscribed to the NOAA/NWS/West Coast and Alaska Tsunami Warning Center's feed of ahead-of-time tsunami warnings for the U.S. West Coast, Alaska, and British Columbia coastal regions (example information statement) for some time. Most, like this one, are "wolf-cry" statements stemming from some seismic activity somewhere that won't generate any tsunami, marked with the all-important body phrase "The magnitude is such that a tsunami WILL NOT be generated" - meaning I don't really care about them.

Today I made a Yahoo pipe that filters those out of the feed. Here is the resulting feed, containing only positive tsunami warning statements. Behind the scenes, it's created by the simple YQL statement select * from rss where url="http://wcatwc.arh.noaa.gov/rss/tsunamirss.xml" and description like "%tsunami <strong>WILL NOT</strong> be%" (there's some newline or similar after "be" in the original feed, and it doesn't seem like YQL does any whitespace normalization, but this matches well enough to filter the stuff).
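For the curious, the same filtering is only a few lines in plain code, too. Here is a Ruby sketch (the function name and feed handling are my own illustration, not the pipe itself): drop any item whose description contains the all-clear phrase, with whitespace collapsed to cope with that stray newline:

```ruby
require 'rexml/document'

# Keep only items that do NOT carry the "WILL NOT be generated"
# all-clear phrase; whitespace is normalized first, since the feed
# wraps the phrase across a line break.
def positive_warnings(rss_xml)
  doc = REXML::Document.new(rss_xml)
  doc.get_elements('//item').reject do |item|
    desc = item.elements['description'].text.to_s.gsub(/\s+/, ' ')
    desc.include?('tsunami <strong>WILL NOT</strong> be')
  end
end
```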

The resulting feed is my enrapturing contribution to today's supposed end of the world. You may note that the feed is currently (and hopefully still, by the time you read this :-) empty - which is, of course, good news. Enjoy!

On a related hacker's note, it would be really handy to extend Google Reader with a "filter this feed" feature that created these more on the fly, without mucking about for an hour in the Yahoo Pipes interface, and then changed the old subscription to the new feed.


Chrome + NaCl + libmodplug + ... = tracker modules on the web

Some notes I took while looking into how to build a libmikmod_x86_32.nexe and libmikmod_x86_64.nexe to get Chrome tracker module playing support from javascript (the work seems to be done already in naclports) via NativeClient, for all the MOD, S3M, XM, IT, 669, AMF, AMS, DBM, DMF, DSM, FAR, MDL, MED, MTM, OKT, PTM, STM, ULT, UMX, MT2 and PSM formats:

  • In a Chrome 11 profile, go to about:flags, and click Enable under "Native Client", and then the "Relaunch Now" button at the bottom (alternatively: start the session with the --enable-nacl command-line flag)
  • Download the NaCl SDK (for Chrome 11, in my case; eventually the ABI will supposedly freeze and cover a wider range of versions)
  • Download depot_tools, extract it and export NACL_SDK_ROOT=$PWD in that directory (technical overview here)
  • Check out naclports
  • Probably optional: comment out all the RunInstallScript lines in naclports/src/packages/nacl-install-all-bitsize.sh except the one you're interested in (in my case: RunInstallScript libmodplug-0.8.7 nacl-libmodplug-0.8.7.sh)
  • Run nacl-install-all-bitsize.sh 64 (or 32 for a 32-bit build), which, in my case, built $NACL_SDK_ROOT/toolchain/mac_x86/nacl64/usr/lib/libmodplug.a
  • About here my research petered out, as I found this smaller hack for XM-only playback by some forthcoming Japanese fellow (git repository here; English translation of same page c/o Google Translate), which has basic functionality
  • If you got curious about my original avenue and proceeded further, I'd love to hear about it, especially if you managed to build the final .nexe files; do post a comment!


Github tag and branch labels

I just made a little update to my github improved! user script; now it shows you branch and tag labels in the commits view, like this:

There have been some more mystery-meat features slipping in there too, somewhat unannounced; if you click a committer icon, a little filter panel opens on top that lets you see how many commits in the view were by whom, and if you click one of those, hides those commits. I got the idea when I was playing with Autopatchwork at some point, unpaginating a whole repository's worth of commits, and wanted to slice and dice the view a bit, get aggregate stats and the like. This is what Greasemonkey's early history (from 0.8 and back) looks like, in terms of authors (not committers) involved, for example:

Autopatchwork could use some more coverage on the English-speaking web, by the way, because of its neat way of aggregating user data from contributors. Instead of a dedicated backend server someone maintains for the script to work, it uses a wiki-like JSON database for public domain content: Wedata.net; define your JSON schema and let anyone who wants to fill in, edit and co-maintain the data (here: unpagination url regexps and node xpaths). The editing process looks a bit like this. I am especially glad to have found it, as I've wanted a service like that for a long time without really wanting to host it, and having a public cloud sync point from which mini-applications like this can update their localStorage copies of the data is a neat trick.

Anyway, happy githubbing! As with so many little features before it, I already can't fathom going back to a github without this feature (and I've only been using it since yesternight). It helps a lot seeing what went into which release at a glance, without doing all sorts of manual work. It would be nice for it to mark branch-off points too, but it would require some kind of merge-base type api end-point, for digging up where a branch's closest common ancestor is to all other branches. Maybe time for devising a neat response format and crafting a little api feature request.



I made myself a little command-line tool gravatar.rb today, which eats email addresses and outputs "gravatar-url email@address" lines for all email addresses that have a customized gravatar created.

It's fun to run git log|gravatar for projects you've got checked out; here's yui3:

(People showing up more than once either have multiple email addresses registered with gravatar that they commit via, or borrow someone else's picture. :-)
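The underlying trick, per Gravatar's documented image request API, is simple enough to sketch (this is an illustration, not gravatar.rb itself): hash the normalized address, and request with d=404 so addresses without a custom image come back as an HTTP 404:

```ruby
require 'digest/md5'

# Gravatar identifies an address by the MD5 hex digest of its
# trimmed, lower-cased form; d=404 makes the request fail for
# addresses that have no customized gravatar uploaded.
def gravatar_url(email)
  hash = Digest::MD5.hexdigest(email.strip.downcase)
  "https://www.gravatar.com/avatar/#{hash}?d=404"
end
```

A tool like gravatar.rb can then fetch each URL and print only the ones that don't 404.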


Installing perl modules on Snow Leopard and Homebrew

Preamble: this post is a great read if you really want your Perl stuff in /usr/local instead of in /Library/Perl, the MacOS X way. Otherwise, maybe not so much! (The error I had stumbled on, which made me think cpan didn't just work, was apparently that I had at some point used sudo cpan and gotten permissions out of whack. If cpan were a little more helpful, chances are I would have figured that out sooner -- but let's not toss a blog post that was fun to write and still has a few useful nuggets, despite being built on an otherwise shoddy foundation. :-)

First up: this isn't any more Homebrew-specific than that I use the perl Apple ships with Snow Leopard (5.10 at this time of writing, for 10.6.7 – I'm on a MacBook Air myself), with all its built-in DTrace and MacOS integration goodness. Read that fine page for authoritative data; I neither am nor pretend to be a perl savant.

Using the system perl is how Homebrew prefers it – unlike some other packaging systems I shall not mention here, as a service to other fine people trying to google for "perl mac snow leopard -this -that". Should you for any reason wish to mention those in comments on this post, you may call them "Funk" and "Warts" (or "MacWarts"). I may delete your comment if you don't. (Nothing personal; just sayin'. And you might as well not, anyway, I am unlikely to be able to help you.)

Anyway, so I want my perl modules installed in /usr/local, and I want them in my @INC when I run /usr/bin/perl – and it's a shame that the system perl doesn't look in any of these, right?

Wrong! (I told you: read this fine manual written by knowledgeable people!) Still, a neat thing they describe is that the MacOS X perl installation will happily read /Library/Perl/5.10.0/AppendToPath and /Library/Perl/5.10.0/PrependToPath and append / prepend any directories they list (one line per path, normal unix style) to @INC. Or, of course, any other version than 5.10.0 – perl -v will tell you what you've got installed. So, as root, I appended /usr/local/lib/perl5/site_perl/ to my AppendToPath.

Next up, assuming you've already made a mess trying to install stuff with cpan (I had), you may want to do what I did, which is rm -rf ~/.cpan (which permanently nukes it; should you want to be able to undo that, instead move it aside somewhere – you're solely responsible for your actions here, as always). Then run cpan. It'll ask you tons of questions you might not know the answers to any better than I did if you say no, or give slightly-okay defaults if you say yes. Some frustrating experimentation later, I came up with this recipe, which worked for me: first yes it, and then, when you get the prompt, type o conf. It'll probably look a bit like this, if the defaults remain about the same:
cpan[1]> o conf
$CPAN::Config options from '/Users/jhs/.cpan/CPAN/MyConfig.pm':
    commit             [Commit changes to disk]
    defaults           [Reload defaults from disk]
    help               [Short help about 'o conf' usage]
    init               [Interactive setting of all options]

    applypatch         []
    auto_commit        [0]
    build_cache        [100]
    build_dir          [/Users/jhs/.cpan/build]
    build_dir_reuse    [1]
    build_requires_install_policy [ask/yes]
    bzip2              [/usr/bin/bzip2]
    cache_metadata     [1]
    check_sigs         [0]
    colorize_debug     undef
    colorize_output    undef
    colorize_print     undef
    colorize_warn      undef
    commandnumber_in_prompt [1]
    commands_quote     undef
    cpan_home          [/Users/jhs/.cpan]
    curl               [/usr/bin/curl]
    dontload_hash      undef
    dontload_list      undef
    ftp                [/usr/bin/ftp]
    ftp_passive        [1]
    ftp_proxy          []
    getcwd             [cwd]
    gpg                []
    gzip               [/usr/bin/gzip]
    histfile           [/Users/jhs/.cpan/histfile]
    histsize           [100]
    http_proxy         []
    inactivity_timeout [0]
    index_expire       [1]
    inhibit_startup_message [0]
    keep_source_where  [/Users/jhs/.cpan/sources]
    load_module_verbosity [v]
    lynx               []
    make               [/usr/bin/make]
    make_arg           []
    make_install_arg   []
    make_install_make_command [/usr/bin/make]
    makepl_arg         []
    mbuild_arg         []
    mbuild_install_arg []
    mbuild_install_build_command [./Build]
    mbuildpl_arg       []
    ncftp              []
    ncftpget           []
    no_proxy           []
    pager              [/usr/bin/less]
    password           undef
    patch              [/usr/bin/patch]
    prefer_installer   [MB]
    prefs_dir          [/Users/jhs/.cpan/prefs]
    prerequisites_policy [ask]
    proxy_pass         undef
    proxy_user         undef
    randomize_urllist  undef
    scan_cache         [atstart]
    shell              [/bin/zsh]
    show_unparsable_versions [0]
    show_upload_date   [0]
    show_zero_versions [0]
    tar                [/usr/bin/tar]
    tar_verbosity      [v]
    term_is_latin      [1]
    term_ornaments     [1]
    test_report        [0]
    unzip              [/usr/bin/unzip]
    use_sqlite         [0]
    username           undef
    wait_list          undef
    wget               [/usr/local/bin/wget]
    yaml_load_code     [0]
    yaml_module        [YAML]


Then paste this little snippet, which sets most things right (it's a good decade to use a UTF-8 terminal):
o conf term_is_latin 0
o conf check_sigs 1
o conf make_arg -j3
o conf make_install_arg -j3
o conf makepl_arg PREFIX=/usr/local
o conf mbuildpl_arg --install_base /usr/local
o conf commit

This will have make run up to three parallel jobs when compiling modules, and use Module::Signature if you install it, which seemed like a nice enough idea (at least if you have gpg installed; if not, you may want to skip the check_sigs part, or install gpg first -- via brew install gpgme, for instance). And, most importantly, it'll put stuff in /usr/local at installation time, so perl will find it at invocation time. (Some day, that might just work right out of the box on Mac OS too.)
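As a quick sanity check of that last point (assuming perl is on your PATH; the exact site_perl path below is an assumption that varies with your perl build), you can list the library search path and look for /usr/local:

```shell
# Does this perl search /usr/local for modules? If not, PERL5LIB is one fix.
perl -le 'print for @INC' 2>/dev/null | grep /usr/local \
  || echo "add /usr/local/lib/perl5/site_perl to PERL5LIB"
```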

And that's it. Try it with cpan Module::Signature, for instance, if you enabled that, and you'll eventually end up with a fatter /usr/local/lib/perl5/site_perl. You are encouraged to drop other helpful tips here, especially if you know a thing or two about perl or cpan that I totally should have mentioned. Happy hacking! :-)


Musings on DNA, code and otherwise

From college days, I recall microbiology as one of the more fascinating fields I never opted to pursue, but still developed a latent interest in. The evolved, highly complex and cross-connected execution context of biology, at the deepest levels where it meets (or indeed is) the realities of chemistry and physics, speaks to my geek genes. There are so many levels of exciting, beautiful and interesting stories, events and developments woven together and buried in the genome! Not to mention it is open source!

You want to grok something? UTSL! If you don't yet understand it, it's because you are not knowledgeable enough. Go learn! Experiment. Think hard. Figure it out. And then share what you learned, so others can reach further, make it do more, be more efficient, go faster, taste better, or what have you. I think choosing mad computer science over mad genetics was mostly a cultural gut draw towards openness: away from the lock-in of lawyers at looming heavy corporations protecting the intellectual offspring stemming from the money invested into powering their geeks' forays into the unknown, and toward the almost open ecosystem of what today amounts to the web and places like github, where people promiscuously and chaotically spin off of each other's work, making stuff better together, for fun and profit.

But back to genetic code, as inspired by a read of an article on MS, schizophrenia and bipolar disorder as related to endogenous retroviruses (meaning stuff we all carry as common heritage, though not activated in most of us). One of the things we were taught in my classes was about the huge amount of introns and other non-coding DNA (stretches that are seemingly not active, as they aren't observed to ever get translated into proteins -- which is essentially how microbiological code execution happens) observed in all living organisms; about 98% of human DNA is non-coding. Of course the depth of the insights we absorbed was rather meagre, and although omnipresent easy-access Wikipedia was not yet around, it felt fairly obvious that the rest would, at the very least, end up having all sorts of other functions or effects under conditions and circumstances that would relate to it somehow. If you have ever researched (or exploited) a buffer overflow in some program, you probably know that wherever the program counter accidentally ends up, whatever junk or data happens to be in that memory region is suddenly, for all intents and purposes (or despite them :-), code! Maybe it will do something interesting. Or nefarious. Probably nefarious, really.

Biology, chemistry and physics, of course, just add a gazillion more dimensions to the "functions and effects of things", while also imposing a bus-load of peculiar constraints and introducing still more environmental conditions from that rather chaotic combined virtual machine (three different zoom levels at which things happen to shoot your code in the foot).

If you thought debugging code was hard, it's actually a delightful walk in the park, for the most part -- free of concerns like the ethics of feeding useful test arrays into your debugger, or of test subjects that can't keep dying on you all the time until you figure it out. Nor are we debugging software that co-evolved on a planetary scale over a couple of billion years; ours was caused by, at the very most, a couple of decades' worth of ingenuity from fewer than thousands (or at most millions) of humans, all more or less acting with actual intent, often even worthy of words like "design". We're kind of lucky, that way! And oftentimes, the whole code base is even your very own damn fault! You could even know full well what every line of your code is supposed to do, even though you might not yet have discovered all the things it should probably also do to work under the conditions it'll be subject to when unforgiving users run it. (Even yourself!)

I wonder how far into the future we have to go to see an iLife doing justice to its name, providing the creative class with the tools to be creative with the workings of micro- (or, for that matter, even macro-)organisms, the same way it's letting today's happy amateurs go wild with arts like graphics, music and video. Lots of people have rather rigid ideas about what counts as art, science, work, life style, communication and so many other arbitrary things you can do for whichever reasons propel you, based, among other things, on the rules, rulers or methods used in each field, the motivations behind them, et cetera. Try breaking a few; you end up in far more interesting places. And make and bring friends along the way!


Reading Chrome, Firefox and Opera extensions in Emacs

If you're a fan of emacs' archive-mode, which lets you load zip files and the like just as it lets you load regular directories, and know that Firefox xpi, Chrome crx and Opera oex extensions are really just thinly wrapped zip files under the hood, you might expect Emacs to grok them all the same. I did, and was a little surprised when it didn't just work. (So maybe I'm running an old Emacs build?)

Anyway, I looked into the matter and did a little poking about to see what setup would do the trick. In the case of Firefox and Opera, it's really easy, as the xpi and oex "formats" only amount to changing the filename extension and making sure you have the right files in there -- so all you have to do is make sure that ("\\.xpi\\'" . archive-mode) exists in your auto-mode-alist, and ("\\.xpi\\'" . no-conversion) in your auto-coding-alist (there is a little snippet for your .emacs at the end of this post), plus the corresponding oex variants.

In the Chrome case (where the crx format prepends a header with a format version and the extension author's RSA public key and signature), besides making the corresponding crx change to the same variables, I also had to patch the lisp/arc-mode.el file lightly:

If you try loading a crx and find it already works, someone upstream has already applied the same or a better patch than mine to your installation; otherwise, read on. To find where it lives on your system, type C-h f archive-mode <return> and follow the "archive-mode is a compiled Lisp function in `arc-mode.el'" link, apply the patch, and M-x byte-compile-file the result again.
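For reference, the container the patch has to skip past is the CRX (version 2) header: the magic number "Cr24", a format version, the public key and signature lengths, then the key and signature themselves, with the plain zip data after that. A little Python sketch (an illustration of the layout, not the arc-mode patch itself):

```python
import struct

def wrap_crx2(zip_bytes, pubkey, sig):
    """Wrap zip data in a toy CRX2 container: magic, version, lengths, key, sig, zip."""
    return (b"Cr24"                         # magic number
            + struct.pack("<III", 2,        # format version 2
                          len(pubkey), len(sig))
            + pubkey + sig + zip_bytes)

def zip_offset(crx):
    """Byte offset where the embedded zip archive starts (what arc-mode must skip)."""
    magic, version, keylen, siglen = struct.unpack_from("<4sIII", crx, 0)
    assert magic == b"Cr24" and version == 2
    return 16 + keylen + siglen             # fixed 16-byte header + key + signature
```

Everything from zip_offset onward is an ordinary zip archive, which is why a small header-skipping patch is all archive-mode needs.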

The .emacs snippet that makes it all kick in automagically:
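(A sketch of what that snippet amounts to, going by the description above; the crx entries only pay off once arc-mode is patched as per the previous section:)

```elisp
;; Open Firefox (xpi), Opera (oex) and Chrome (crx) extensions as archives.
(dolist (ext '("xpi" "oex" "crx"))
  (let ((pattern (concat "\\." ext "\\'")))
    (add-to-list 'auto-mode-alist   (cons pattern 'archive-mode))
    (add-to-list 'auto-coding-alist (cons pattern 'no-conversion))))
```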

Note that in the Chrome extension case, you won't be able to edit and save any of the contents in place, as you can with plain zip (xpi, oex) files; the magic to strip and re-add public keys and signatures will have to be somebody else's late Saturday hack to complete. :) Enjoy!