Firefox OS, JavaScript, Mozilla, Open Source, Programming, Software, Web Applications

Introducing Reenact: an app for reenacting photos

Here’s an idea that I’ve been thinking about for a long time: a camera app for your phone that helps you reenact old photos, like those seen in Ze Frank’s “Young Me Now Me” project. For example, this picture that my wife took with her brother, sister, and childhood friend:


Reenacting photographs from your youth, taking pregnancy belly progression pictures, saving a daily selfie to show off your beard growth: all of these are situations where you want to match angles and positions with an old photo. A specialized camera app could be of considerable assistance, so I’ve developed one for Firefox OS. It’s called Reenact.

The app’s opening screen is simply a launchpad for choosing your original photo.


Picking the photo is delegated to any app that has registered itself as able to provide one, so these screens come from whichever app the user chooses for browsing their photos.



The camera screen of the app begins by showing the original photo at full opacity.


The photo then continually fades out and back in, allowing you to match your current pose to the old photo.
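The fade effect itself is straightforward to achieve in a Web app. A minimal sketch (assuming the overlaid original photo is an element with a hypothetical `overlay` class, not Reenact's actual markup) could use a CSS animation:

```css
/* Continuously fade the original photo out and back in
   so the live camera view shows through it. */
.overlay {
  position: absolute;
  animation: pulse 3s ease-in-out infinite;
}

@keyframes pulse {
  0%   { opacity: 1; }
  50%  { opacity: 0; }
  100% { opacity: 1; }
}
```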


Take your shot and then compare the two photos before saving. The thumbs-up icon saves the shot, or you can go back and try again.


Reenact can either save your new photo as its own file or create a side-by-side composite of the original and new photos.


And finally, you get a choice to either share this photo or reenact another shot.




If you’re running Firefox OS 2.5 or later, you can install Reenact from the Firefox OS Marketplace, and the source is available on GitHub. I used Firefox OS as a proving ground for the concept, but now that I’ve seen that the idea works, I’ll be investigating writing Android and iOS versions as well.

What do you think? Let me know in the comments.

HTML, JavaScript, Photography, PHP, Programming

Turn your iPhoto library into a Web photo album

iPhoto is good for managing and organizing photos and photo metadata, but it’s not easy to get that information back out if you want to share more than a few photos. I recently finished scanning 13,000 family photos and importing them into iPhoto, and I wanted to be able to share all of those photos (complete with the faces I spent hours tagging) with my brothers and sisters.

I could just burn copies of the iPhoto library to discs, but not all of my siblings have Macs, and iPhoto may not be around for very long. I needed a way to export all of the photos and metadata in a format that I was confident would be supported for a long time: the Web.

The result was a PHP script that exports an iPhoto library into folders of image files (one folder per event), generates JSON arrays of event and photo metadata, and builds a minimalist JavaScript-powered website that provides a simple photo viewing experience. The website can be put online, or it can be run entirely offline (like from a DVD, which is my plan for sharing with my family members). The code is all open source, and the usage instructions are in the README.
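For illustration only, the event metadata the script generates might look something like this (the field names here are hypothetical, not the script's actual schema):

```json
[
  {
    "title": "Family Reunion",
    "date": "1994-07-04",
    "photos": [
      {
        "file": "events/family-reunion/001.jpg",
        "caption": "Everyone at the lake",
        "faces": ["Mary", "Tom"]
      }
    ]
  }
]
```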

Here’s a screenshot of the main page of the website it generates:


And here’s an example of a single photo’s page:

I know it’s a pretty niche project, but hopefully it will come in handy for anyone looking to make their iPhoto library more shareable and accessible, especially as Apple drops support for iPhoto in the near future.

iPhoto Disc Export

JavaScript, Programming, Typo.js

A practical application of Typo.js

I released Typo.js about two years ago as a pure JavaScript implementation of the Hunspell spellchecker. I’ve been using it in Comment Snob for Chrome, but I haven’t seen many other projects using it. (JavaScript spellchecking is a very narrow niche, to be fair.)

A few days ago, however, I was made aware of a new project using Typo.js: NoTex. It’s a browser-based reStructuredText editor.

NoTex screenshot

The author, Hasan Karahan, has used Typo.js to support 87 (!) different dictionaries. I’m happy to report that spellchecking in the app is smooth and indistinguishable from the native browser spellchecker.

You can follow NoTex’s development on GitHub.

Browser Add-ons, Comment Snob, Google Chrome, JavaScript, Mozilla, Mozilla Firefox, Programming, Software, Technology, YouTube Comment Snob

Announcing Typo.js: Client-side JavaScript Spellchecking

When I first ported YouTube Comment Snob to Chrome, Chrome’s lack of a spellchecking API for extensions meant that I would be unable to implement Comment Snob’s most popular and distinguishing feature: the ability to filter out comments based on spelling mistakes. That, my friend, is about to change.

I’ve finished work on the first version of a client-side spellchecker written entirely in JavaScript, and I’m calling it Typo.js. Its express purpose is to allow Chrome extensions to perform spellchecking, although there’s no reason it wouldn’t work in other JavaScript environments. (Don’t use it for Firefox extensions though; use Firefox’s native spellchecking API.)

How does it work?

Typo.js uses Hunspell-style dictionaries – the same ones used in the spellcheckers of and Firefox. (Typo.js ships with the latest American English dictionary, but you could add any number of other dictionary files to it yourself.) You initialize a Typo.js instance in one of two ways:

Method #1

var dictionary = new Typo("en_US");

This tells Typo.js to load the dictionary represented by two files in the dictionaries/en_US/ directory: en_US.aff and en_US.dic. The .aff file is an affix file: a list of rules for creating multiple forms of a word by adding prefixes and suffixes. The .dic file is the dictionary file: a list of root words and the affix rules that apply to them. Typo parses these files and generates a complete dictionary by applying the applicable affix rules to the list of root words.
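For a sense of what these files look like, here is a minimal, made-up fragment (not the actual en_US files). In the .dic file, `hello/S` marks the root word "hello" as subject to affix rule S; the .aff rule S appends an "s", so this one entry expands into both "hello" and "hellos":

```
# en_US.dic (fragment): approximate word count, then one root word per line
1
hello/S

# en_US.aff (fragment): suffix rule "S" strips nothing and appends "s"
SFX S Y 1
SFX S 0 s .
```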

Method #2

var dictionary = new Typo("en_US", affData, dicData);

With this initialization method, you supply the data from the affix and dictionary files. This method is preferable if you wish to change the location of the affix and dictionary files or if you are using Typo.js in an environment other than a Chrome extension, such as in a webpage or in a server-side JavaScript environment.

Once you’ve initialized a Typo instance, you can use it to check whether a word is misspelled:

var is_correct_spelling = dictionary.check("mispelled");


Depending on your needs, you can configure Typo.js to perform word lookups in one of two ways:

  1. hash: Stores the dictionary words as the keys of a hash and does a key existence check to determine whether a word is spelled correctly. Lookups are very fast, but this method uses more memory.
  2. binary search: Concatenates dictionary words of identical length into sets of long strings and uses binary search in these strings to check whether a word exists in the dictionary. It uses less memory than the hash implementation, but lookups are slower. This method was abandoned as it became impractical to implement for some features.

See this blog post by John Resig for a more detailed exploration of possible dictionary representations in JavaScript.
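The two approaches can be sketched in plain JavaScript. This is a toy illustration with a hypothetical five-word dictionary, not Typo.js's actual internals:

```javascript
var words = ["cat", "dog", "bird", "fish", "horse"];

// Method 1: hash lookup. One key per word; fast existence
// checks at the cost of extra memory.
var hash = {};
for (var i = 0; i < words.length; i++) {
  hash[words[i]] = true;
}

function checkHash(word) {
  return hash.hasOwnProperty(word);
}

// Method 2: words of identical length are sorted and concatenated
// into one long string per length; a lookup binary-searches the
// string that corresponds to the word's length.
var buckets = {};
words.slice().sort().forEach(function (word) {
  buckets[word.length] = (buckets[word.length] || "") + word;
});

function checkBinary(word) {
  var str = buckets[word.length];

  if (!str) return false;

  var len = word.length;
  var lo = 0;
  var hi = str.length / len - 1;

  while (lo <= hi) {
    var mid = Math.floor((lo + hi) / 2);
    var candidate = str.substr(mid * len, len);

    if (candidate === word) return true;

    if (candidate < word) lo = mid + 1;
    else hi = mid - 1;
  }

  return false;
}
```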

Practice vs. Theory

Typo.js is already in use in my Comment Snob extension. You can install it today to experience Typo.js in action, filtering comments on YouTube based on the number of spelling mistakes in each one.

What’s next for Typo.js?

The next step is adding support for returning spelling suggestions; right now, all Typo.js can do is tell you whether a word is spelled correctly or not. It also needs to support Hunspell’s compound word rules. These are the rules that a spellchecker uses to determine whether words like “100th”, “101st”, “102th” are correct spellings (yes, yes, and no, for those of you keeping track) since it would be impossible to precompute a list of all possible words of these forms.
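To see why these have to be rules rather than precomputed words, consider a standalone sketch of the ordinal logic (my own illustration, not Typo.js code):

```javascript
// Returns the correct English ordinal suffix for a number.
// Note that 11, 12, and 13 take "th" despite ending in 1, 2, 3.
function ordinalSuffix(n) {
  var lastTwo = n % 100;

  if (lastTwo >= 11 && lastTwo <= 13) return "th";

  switch (n % 10) {
    case 1: return "st";
    case 2: return "nd";
    case 3: return "rd";
    default: return "th";
  }
}

// A spellchecker-style check: is a word like "102th" a
// correctly formed ordinal?
function checkOrdinal(word) {
  var match = /^(\d+)(st|nd|rd|th)$/.exec(word);

  if (!match) return false;

  return ordinalSuffix(parseInt(match[1], 10)) === match[2];
}
```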

The Typo.js code is available on GitHub. I welcome any and all suggestions or code contributions.

Browser Add-ons, JavaScript, Mozilla, Programming

Uploading form data and files with JavaScript (Mozilla)

One problem I stumble across occasionally in writing Firefox extensions is properly uploading form data that includes a file – that is, assembling the POST request in JavaScript while still maintaining the sanctity of any file or string data. You can’t just do this:

var request = "--boundary\r\n some text\r\n--boundary" + fileBytes + "\r\n--boundary--";

I had to spend a bit of time getting this just right in order to allow ScribeFire to upload media to Posterous, so I'm posting the final solution I arrived at below. It was cobbled together from a dozen different examples I found around the Web (none of which solved the full problem), then lovingly massaged into the elegant function you see before you. With this function, you can pass in an array of fields and files, and the request will be crafted and returned to you, ready for upload.

Instructions for use are in the comment block at the top of the function.

function createPostRequest(args) {
  /**
   * Generates a POST request body for uploading.
   * args is an associative array of the form fields.
   * Example:
   * var args = { "field1": "abc", "field2" : "def", "fileField" :
   *              { "file": theFile, "headers" : [ "X-Fake-Header: foo" ] } };
   * theFile is an nsILocalFile; the headers param for the file field is optional.
   * This function returns an array like this:
   * { "requestBody" : uploadStream, "boundary" : BOUNDARY }
   * To upload:
   * var postRequest = createPostRequest(args);
   * var req = new XMLHttpRequest();
   *"POST", ...);
   * req.setRequestHeader("Content-Type", "multipart/form-data; boundary=" + postRequest.boundary);
   * req.setRequestHeader("Content-Length", postRequest.requestBody.available());
   * req.send(postRequest.requestBody);
   */

  function stringToStream(str) {
    function encodeToUtf8(oStr) {
      var uConv = Components.classes[";1"]
      uConv.charset = "UTF-8";

      return uConv.ConvertFromUnicode(oStr);
    }

    str = encodeToUtf8(str);

    var stream = Components.classes[";1"]
    stream.setData(str, str.length);

    return stream;
  }

  function fileToStream(file) {
    var finStream = Components.classes[";1"]
    finStream.init(file, 1, 0, false); // 1 = PR_RDONLY

    var bufStream = Components.classes[";1"]
    bufStream.init(finStream, 9000000);

    return bufStream;
  }

  var mimeSvc = Components.classes[";1"]

  const BOUNDARY = "---------------------------32191240128944";

  var streams = [];

  for (var i in args) {
    var buffer = "--" + BOUNDARY + "\r\n";
    buffer += "Content-Disposition: form-data; name=\"" + i + "\"";

    if (typeof args[i] == "object") {
      // A file field: add the filename, any extra headers, and the
      // file's MIME type, then stream the file itself.
      buffer += "; filename=\"" + args[i].file.leafName + "\"";

      if ("headers" in args[i]) {
        for (var q = 0; q < args[i].headers.length; q++) {
          buffer += "\r\n" + args[i].headers[q];
        }
      }

      buffer += "\r\nContent-Type: " + mimeSvc.getTypeFromFile(args[i].file);
      buffer += "\r\n\r\n";

      fileToStream(args[i].file)
      stringToStream("\r\n"));
    }
    else {
      // A plain text field.
      buffer += "\r\n\r\n";
      buffer += args[i];
      buffer += "\r\n";

    }
  }

  // The closing boundary.
  var buffer = "--" + BOUNDARY + "--\r\n";

  // Chain all of the string and file streams into a single stream.
  var uploadStream = Components.classes[";1"]

  for (var i = 0; i < streams.length; i++) {
  }

  return { "requestBody" : uploadStream, "boundary" : BOUNDARY };
}

Browser Add-ons, Facebook, Facebook Image-to-Email, JavaScript, Mozilla Firefox

Facebook Image-to-Email: Broken Again

I am aware that the Facebook Image-to-Email Firefox extension is (once again) broken. Given that version 1.1 of the extension was working and that the same version is now not working, the breakage has to be due to a change that Facebook made. The problem is that I can't discern any relevant changes in Facebook's profile pages that would cause it.

The crux is this: I’m getting an NS_ERROR_DOM_SECURITY_ERR error when trying to run context.getImageData(). From what I’ve read, this implies that the JavaScript is not in the same domain as the image that was fed into the canvas and/or does not have permission to know the image’s contents, but as far as I can tell, Facebook didn’t change where the e-mail images are coming from, so that would seem to be a strange problem to have.

Any insight into this is appreciated.

Browser Add-ons, Facebook, Facebook Image-to-Email, JavaScript, Mozilla Firefox

Convert Facebook e-mail images to actual e-mail links

The massively popular social network Facebook uses images to display the e-mail addresses of your friends, making it impossible to copy the e-mail address or click on it to send e-mails to your friends, thus making Facebook’s own proprietary in-site messaging system more attractive to its users. Yesterday, Gervase Markham posited that it should be possible to determine the text displayed in the image programmatically by way of the canvas tag and some JavaScript. I’m writing this to confirm that it is indeed possible and has been achieved.

The extension I’ve written to do this is called Facebook Image-to-Email. On any Facebook page containing an e-mail address image, the extension converts the image to text using the following workflow:

  1. Copies the image to a canvas using drawImage()
  2. Scans the canvas for matches against a pre-determined set of character sprites using getImageData()
  3. Replaces the image with a clickable e-mail address in plain text

Here’s an example of a “before” view:


And “after”:


Of course, this appears to be pretty simple. Ironically, the hardest part of solving this problem was the only part that Gervase remarked would be trivial: matching the image against a set of known character patterns. Since the font is not monospaced, and certain letters bleed into each other when adjacent (such as “89” and “ef”), it wasn’t possible to just store the pixel values for each full letter. What I ended up doing was matching against only the centers of the letters (which are never affected by adjacent letters) and ignoring the character edges.

One other detail as to the implementation: there appears to be some sort of security restriction in Firefox on reading data from images that are not in the same domain as the script reading them. For example, trying to call getImageData() from the chrome on a canvas that contained an image loaded from a remote domain returned null every time; the same happened if the script was running locally but loading a remote image. For this reason, the actual scripting that converts the image to text has to be injected into each page that requires it so that it appears to be running in the same domain as the image.

I’m not claiming that this is the most efficient implementation, but it is definitely complete. In my testing so far, it has correctly identified 100% of the e-mail addresses displayed in the images.

JavaScript, Mozilla Firefox, Netscape Navigator, Pownce, Safari, Web 2.0, Web Applications

Pownce has a big security problem

Kevin Rose’s latest project, Pownce, has a glaring security problem on its front page. The JavaScript that Pownce uses in its login form can reveal your password in plain text on the screen. Here are the steps to reproduce the problem in Firefox:

  1. Log in to Pownce. Allow Firefox to save your login information for next time, and then log out.


  2. Navigate back to the Pownce front page and type the first part of your username in the “Enter username…” box. Firefox will supply all of the matching usernames it remembers for this site. (So far, so good.)


  3. Select your username and press return to have the browser autofill the rest of your information. Oh look, there’s your Pownce password in plain view! I hope no one in the room was watching you log in…


The method that Pownce uses to show the “Enter password…” prompt in the password field is the reason for this malfunction: browsers force all text in password fields to be hidden with asterisks, so if you want to show normal text in a password field, as Pownce has chosen to, you have to do so in a non-standard way.
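As an illustration of the kind of non-standard approach involved (a guess at the general pattern, not Pownce's actual code), a site can fake a placeholder by starting with a text field and swapping its type when the user focuses it:

```html
<!-- Hypothetical illustration, not Pownce's markup. -->
<input type="text" id="password" value="Enter password...">

<script>
  var field = document.getElementById("password");

  field.onfocus = function () {
    if (this.value === "Enter password...") {
      this.value = "";
      this.type = "password"; // input is now masked
    }
  };

  field.onblur = function () {
    if (this.value === "") {
      this.type = "text"; // unmasked again, to show the prompt
      this.value = "Enter password...";
    }
  };
</script>
```

If the browser's saved-password autofill fires while the field's type is still "text", the real password lands in a visible field, which matches the behavior described above.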

This bug affects Firefox and Netscape users who have JavaScript enabled, but it doesn’t affect Safari users.