Linux breaks 1% market share on the client? Nice! Still not nearly enough. I can’t believe more people don’t use Linux, especially over Windows. I like Macs fine and enjoy OS X and several aspects of the user experience, but for programming it’s hands down Linux; there’s nothing more straightforward. I even prefer Linux as a regular user; it’s on my PC (Ubuntu right now). I especially like the super fast and seamless installation of free and open source software. The only encumbrance is the lack of good-quality substitutes for some software products. I’m not totally satisfied with the open source word processing, vector graphics, and image editing software right now, but that could change if Linux keeps increasing its market share. It’s FREE, people! All you need to do is burn it to a regular CD-R, insert the CD, and hit F11 or whatever when you reboot your computer. Unbelievable.

Mellon Collie Cover Art

I have always loved the album art for The Smashing Pumpkins’ Mellon Collie and the Infinite Sadness, so I’m posting it here; some of the images might be from The Aeroplane Flies High box set. It was illustrated by John Craig and designed by Frank Olinsky (also responsible for Sonic Youth’s iconic covers (-: ). There’s more info at AIGA. Apparently the Smashing Pumpkins font is Glorietta.

front cover:

back cover:

Bug ID Helper Firefox Add-on

For a while, the context menu add-on (from my previous post) seemed to be helping me out well enough with Bugzilla bug ids on MXR, in emails, and stuff like that. But after re-evaluating my workflow and some of the nice features on the domain itself, I ended up finding linkification of bug ids to be much more useful and productive. Actually, even more than that, I found adding tooltip descriptions to bug ids to be more useful. I often don’t actually want to open the bug; I just want to know what the bug is about, or refresh my memory a bit. So I made a Bug ID Helper Firefox/Thunderbird add-on. Bug ID Helper has three basic features:

Linkification – linkifies bug ids in web page text:

linkification of bug id

Tooltipification – adds descriptive tooltips to bug ids:

tooltip over bug id

Context-menu – adds menu item when number selected:

context menu with bug id option

By default, this will linkify and add tooltip text to any bug id in every webpage you load, but there are options to just have tooltips, just add links, or only linkify a whitelist of websites. There are also options (editable from the add-ons manager) for different combinations of bug information displayed in the tooltip text, and although I set a default, you can change it in the preferences. For instance, it would linkify this: Bug # 1389.

I like to think that it’s quiet and fast. I’ve had it turned on for a week or so and haven’t really noticed it, which is good. I tried to optimize for the common case (no bug ids on the webpage), so speed-wise it shouldn’t be noticeable.

Technical crap

Linkification and tooltipification brought up a lot of issues and I ended up learning a lot about DOM traversal and manipulation, XPath, XHR, and regex speed.

To find the occurrences of bug ids, I wanted to find all the text in the content that looked something like “bug 2375” or “BUG #31721”. The very first way I did this was to recursively walk the DOM tree, executing a regex against the content of any text node and recursing into any other node that wasn’t in a blacklist of bad nodes (meta, img, applet, etc.). I linkified by snipping out the bug text with the splitText function and replacing the matches with new anchor nodes. This was nice and intuitive, but also very slow. On pages with thousands of lines of text, or a lot of individual text nodes, this parsing could take whole seconds to execute.
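The splitText step can be sketched outside the DOM. This is my own reconstruction, not the add-on’s actual code, and it uses a looser regex than the ones discussed below; the tagged segments stand in for the new anchor nodes:

```javascript
// Given the text of one text node, split it into plain segments and
// bug-id matches. In the add-on, plain segments stay as text nodes and
// each match becomes a new anchor node; here a match is just tagged
// with its bug id so the logic can run without a document.
function splitBugIds(text) {
  const re = /bug\s*#?\s*(\d+)/gi;
  const parts = [];
  let last = 0;
  let m;
  while ((m = re.exec(text)) !== null) {
    if (m.index > last) parts.push({ text: text.slice(last, m.index) });
    parts.push({ text: m[0], bugId: m[1] }); // would become an <a> node
    last = m.index + m[0].length;
  }
  if (last < text.length) parts.push({ text: text.slice(last) });
  return parts;
}
```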

Then I found out about XPath. I honestly had no idea. Instead of walking the tree, I queried for all the text nodes in the document (minus ones with bad ancestors) and iteratively searched through each one for bug id occurrences. This cut a serious amount of overhead off of the search and linkification time. Still, some sites took a noticeable amount of time to linkify, so it still wasn’t cool.
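The XPath version looks roughly like this (the function name and the exact blacklist are my guesses, not the add-on’s real code): one query replaces the whole hand-rolled tree walk.

```javascript
// Illustrative blacklist of elements whose text we never want to touch.
const BAD_ANCESTORS = ["script", "style", "textarea", "a"];
const predicate = BAD_ANCESTORS.map(t => `ancestor::${t}`).join(" or ");
// One query for every text node without a blacklisted ancestor.
const TEXT_QUERY = `//text()[not(${predicate})]`;

// In the browser this would run as (not executed here):
function findCandidateTextNodes(doc) {
  const result = doc.evaluate(TEXT_QUERY, doc.body, null,
    XPathResult.ORDERED_NODE_SNAPSHOT_TYPE, null);
  const nodes = [];
  for (let i = 0; i < result.snapshotLength; i++)
    nodes.push(result.snapshotItem(i));
  return nodes; // each node still gets regex-searched for bug ids
}
```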

Then I looked at the regex. I was looking for occurrences of “bug” paired with a number, with whitespace/word-boundary characters surrounding it. My regex looked something like /(\s|\b|^)+bug\s*(\d+)/. After toying with it for a little bit, I noticed that taking out the test for whitespace in front of the bug id made it dramatically faster, something like this: /bug\s*(\d+)/. This makes sense, because as soon as a character is read in, if it’s not a ‘b’ then there’s no chance of a match. In fact, just taking out the ‘+’ quantifier and testing for one character of whitespace or boundary made things fast enough: /(\s|\b|^)bug\s*(\d+)/. I guess there is a lot of whitespace, commas, etc. in text, and catching these at the front of a regex is not the best idea. I would love to know more about how regexes and DFAs actually work, because for some reason I end up using regexes all the damn time; kind of makes me want to take FLAC in the spring…

So after refining my regex and incorporating XPath, things were faster (2–10 times faster, to be specific). But there were still some webpages that took enormous amounts of time to grovel through and linkify. There was one shopping website with about six thousand text nodes that took over 4 seconds to go through!

Finally, I noticed how fast the “Find” functionality was in Firefox for finding occurrences of words in content. There was one downside: Find (nsIFind) only supported literal text and not regular expressions. But this ended up being fine. While I liked to capture the number in my regex and check for separation characters, I really only wanted to examine regions of text that had the exact string “bug” in them. And on almost every webpage I viewed (like the shopping one) there were no occurrences of that word, so Find would know immediately whether I needed to pay attention to the webpage at all. Find also had the pleasant side-effect of returning a DomRange where the bug text occurred. I could easily change the endpoints of the range and execute my regex against this very small region of text. Furthermore, linkifying it just involved wrapping the range with the surroundContents function. And boy did it speed things up. The average webpage now took about 20 ms to grovel through, and I never came across a site that took more than half a second (takes about 200 ms; MXR of browser.js is about 500 ms).
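A sketch of that Range step, assuming Find has handed back a Range around the literal text “bug” inside a single text node (the helper names and the Bugzilla URL scheme are my assumptions, not the add-on’s actual code):

```javascript
// Assumed Bugzilla URL scheme for the generated links.
function bugUrl(id) {
  return "https://bugzilla.mozilla.org/show_bug.cgi?id=" + id;
}

// Widen the Range to cover "bug NNNN", then wrap it in an anchor.
// Not executed here -- it needs a live document and a real Range.
function linkifyRange(range, bugRegex) {
  const node = range.startContainer;             // text node holding "bug"
  const tail = node.data.slice(range.startOffset);
  const m = bugRegex.exec(tail);                 // e.g. /^bug\s*(\d+)/i
  if (!m) return null;
  range.setEnd(node, range.startOffset + m[0].length); // cover the id too
  const a = node.ownerDocument.createElement("a");
  a.href = bugUrl(m[1]);
  range.surroundContents(a);                     // wrap the match in the link
  return a;
}
```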

Update: I found out that searching for “bug” with XPath is just as fast as searching for it with nsIFind. The only problem is that it doesn’t return the DomRange where it is, just the text node, which you then have to search again with your own regex to find the match (unless there is some crazy XPath query for it; let me know!). Needless to say, XPath was easier to work with in this situation, so I switched back to XPath.

I also had a lot of fun with the tooltip and the various ways to get bug information over HTTP, but I’ll save some of that for another post maybe.

Sitting in an English Garden Waiting for the Sun

Every time I listen to I am the Walrus, I think about this rose garden in London. While my mom was working, my dad and my brother and I used to sit in the garden eating crackers and cheese and my dad would eat kippers. Unlike The Rainbow Goblins, the internet has not helped me find what it is called (my parents can’t remember either). It’s on the river near a bridge and it is right beside a church where the bells chime every hour.

Overriding Native Styling

If styling a XUL element just doesn’t seem to work, it may be because that element has default platform-specific styling applied to it. I ran into this problem when I was trying to change the background color of a textbox. To disable this, just add this to your CSS:

.disableMoz {
  -moz-appearance: none;
}

and add disableMoz to the class name of the element you’re trying to style. Of course, there is probably a reason that this style is the default and you should consider the stylistic implications of changing it.


The power of the internet and the death of mystery

Today a nagging thought came up that has persisted in my head for the last several years. I was looking at Kokoschka paintings for art class and got a sudden vision of this old children’s book my parents used to read me when I was a kid. I just remember how dark and beautiful I thought this book was, even as a kid. I remembered the pictures vividly: leprechauns dripping in paint of all different colors, huddled around; another scene of the creatures stretching something in the sky. Years ago I asked my parents if they remembered what it was, but they had no idea what I was talking about, so I thought I might have made it up. It’s hard to google just from the pictures in your head; I tried for several years with no yield. Tonight, I resolved to find out what the book was. I searched for “leprechauns rainbow” and “paint leprechauns children’s book” and several variations of these. Finally, somewhere in the results I saw the phrase “The Rainbow Goblins” and I knew I had found it! It was so good to see the pictures again; I felt at peace.

The Rainbow Goblins

At the same time I was disappointed that it was all over. It was this wonderful hazy mystery to me. To find it, and to find it so easily, today was odd. I was supposed to find it again in my attic or at some flea market in Belgium. Or never find it and always wonder if it was all in my head. The internet has changed things like that. Stuff like this isn’t a mystery anymore; people aren’t a mystery anymore; you can find out pretty much anything about anyone. You can never wonder what someone is doing ‘right now’ anymore because they are twittering it all over the place. Everything is so connected, and for some reason this makes me feel safe, but with that comes a huge loss of some quality I can’t really describe.