Testing your JavaScript with Jasmine and Grunt

by Andy Appleton

I’ve written about clean, modular JavaScript before, and a nice benefit of writing code that way is testability. Recently I’ve been using Jasmine to write JS unit tests, having also played with Mocha and QUnit. Jasmine has a clean interface, runs specs inside an HTML page, and I find the finished spec files extremely readable thanks to RSpec-style test nesting.
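To give a flavour of that nesting, here’s a tiny hypothetical spec. The describe/it/expect stand-ins at the top are only there to make the sketch self-contained – in a real project Jasmine’s runner provides the real versions as globals.

```javascript
// Minimal stand-ins for Jasmine's globals, just so this sketch is
// self-contained - the real runner provides describe/it/expect for you.
function describe(name, fn) { console.log(name); fn(); }
function it(name, fn) { fn(); console.log('  ✓ ' + name); }
function expect(actual) {
  return {
    toEqual: function(expected) {
      if (actual !== expected) throw new Error(actual + ' !== ' + expected);
    }
  };
}

// The nested, RSpec-ish style that makes the finished specs so readable:
describe('Array.prototype.indexOf', function() {
  describe('when the value is present', function() {
    it('returns the first matching index', function() {
      expect([1, 2, 3].indexOf(2)).toEqual(1);
    });
  });

  describe('when the value is absent', function() {
    it('returns -1', function() {
      expect([1, 2, 3].indexOf(9)).toEqual(-1);
    });
  });
});
```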

Jasmine example spec output

Jasmine’s HTML spec runner is great for writing and running tests locally, but it is not very practical to integrate into a CI server – for that we need a command line tool.

Grunt

Grunt is a really cool build tool for JavaScript. By default it includes tasks for linting, concatenating and minifying files, and running QUnit tests in a headless PhantomJS instance. It also has a nice collection of extra tasks available as modules on npm, including a few for running Jasmine specs.

To get up and running we need to install Grunt, PhantomJS and the Grunt Jasmine runner task from the command line (I’m assuming you have Node and npm installed):

$ cd /path/to/project
$ npm install grunt
$ npm install grunt-jasmine-runner

If you’re on a Mac and use Homebrew you can grab PhantomJS:

$ brew install phantomjs

For other platforms there are a number of pre-built binaries available.

Now all we need to do is add the appropriate configuration to a grunt.js file in the project’s root…

module.exports = function(grunt) {
  grunt.initConfig({
    jasmine : {
      // Your project's source files
      src : 'src/**/*.js',
      // Your Jasmine spec files
      specs : 'specs/**/*spec.js',
      // Your spec helper files
      helpers : 'specs/helpers/*.js'
    }
  });

  // Register tasks.
  grunt.loadNpmTasks('grunt-jasmine-runner');

  // Default task.
  grunt.registerTask('default', 'jasmine');
};

…and run the tests!

$ grunt jasmine

Jasmine command line output

Backbone has made me a better programmer

by Andy Appleton

I saw Jeremy Ashkenas retweet this and it really struck a chord with me. I started playing with Backbone about a year ago and have since used it on large and small projects at work and for fun.

These last two weeks I have been refactoring some JavaScript on StickyGram and I was really surprised at the state of the code I had written not all that long ago. I have been rewriting it to use a number of design patterns that I have (somewhat) inadvertently picked up from Backbone (and underscore).

One object one responsibility

We all write clean, encapsulated, object-oriented JavaScript, right? Guys? No? Me neither – it’s so easy to just fire off an event, nest a few levels of callbacks and be done with it.

Here’s an example. We get some data via an XHR request, append it to the document with a fade effect and then remove some other element.

$.ajax({
  url: '/wherever',
  success: function(data, status, xhr) {
    $('selector').append($(data).hide().fadeIn(function(){
      $('other-selector').remove();
    }));
  }
});

This is simple to write initially, but it becomes very tricky to extend. Say we also want to update another part of the document with a notice to say that the request was successful:

$.ajax({
  url: '/wherever',
  success: function(data, status, xhr) {
    $('selector').append($(data).hide().fadeIn(function(){
      $('other-selector').remove();
    }));
    $('yet-another-selector').text('Request successful');
  }
});

That success callback is gradually becoming responsible for a lot of different things and is getting pretty tricky to read. This only gets worse as other developers come to the code and just need to add a small bit of extra functionality. I find myself cursing them later for the messy code when actually it’s my fault for writing it this way in the first place.

Here’s a much nicer way to deal with this:

function DataModel() {
  this.url = '/wherever';
}

DataModel.prototype.getData = function() {
  $.ajax({
    url: this.url,
    context: this,
    success: this.onSuccess
  });
};

DataModel.prototype.onSuccess = function(data) {
  $(window).trigger('DataModel:success', data);
};

var dataModel = new DataModel();

Now we have a dataModel object which is concerned only with making the request. It triggers an event when it is done which we can listen for in other objects.

function ListView(el) {
  this.$el = $(el);
  this.bindEvents();
}

ListView.prototype.bindEvents = function() {
  $(window).on('DataModel:success', $.proxy(this.addData, this));
};

// jQuery passes the event object first, followed by any data from trigger()
ListView.prototype.addData = function(event, data) {
  this.$el.append($(data).hide().fadeIn($.proxy(this.removeOtherThing, this)));
};

ListView.prototype.removeOtherThing = function() {
  this.$el.find('other-selector').remove();
};

var listView = new ListView('selector');

Right now we have more lines of code, but we also have two separate objects, each performing a discrete function. Each object has a number of methods and each method does one thing. It’s worth noting that we could easily write unit tests for this code – try doing that for the first example…

So now, when we need to do something else when the AJAX request completes, all we need to do is create a new object type which listens for the same DataModel:success event and does its thing independently of our listView. That’s pretty awesome, and it means that our original code stays responsible for the one task it was designed for.
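As a sketch of that, here’s a hypothetical NoticeView which reacts to the same event. To keep the snippet self-contained and runnable I’ve swapped $(window).on / .trigger for a tiny hand-rolled event hub – in the real code you’d keep using the jQuery versions.

```javascript
// Tiny stand-in for $(window).on / .trigger so the sketch runs anywhere;
// the real code would keep using jQuery's event system.
var hub = {
  handlers: {},
  on: function(name, fn) {
    (this.handlers[name] = this.handlers[name] || []).push(fn);
  },
  trigger: function(name, data) {
    (this.handlers[name] || []).forEach(function(fn) { fn(data); });
  }
};

// A hypothetical second listener - it knows nothing about listView or
// dataModel, it just reacts to the same event.
function NoticeView() {
  this.message = null;
  hub.on('DataModel:success', this.showNotice.bind(this));
}

NoticeView.prototype.showNotice = function() {
  this.message = 'Request successful';
};

var noticeView = new NoticeView();
hub.trigger('DataModel:success', '<li>new item</li>');
console.log(noticeView.message); // 'Request successful'
```

Nothing in the existing objects had to change to support the new behaviour – that’s the payoff of the event-based decoupling.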

Thanks, Backbone!

This example follows the same Model/View pattern as Backbone without actually using the library. Really it’s just well-structured code, but it’s often easy to take the (seemingly) quicker option instead.

I’m spending more time up front thinking about the structure and maintainability of my code, in the expectation that it will stop it turning into the tangled spaghetti mess that we all hate coming back to – and that is definitely a good thing.

Making Sublime Text 2 Beautiful

by Andy Appleton

I’ve been using the TextMate 2 alpha since Christmas, but sometime in the last few weeks I got that sinking feeling that it was not getting better quickly enough and probably never would. I played around with Chocolat and Sublime Text 2 as potential alternatives and settled on Sublime because it is blazing fast and seems very stable.

Fast and stable it may be, but it’s also pretty ugly. This might not bother a lot of people, but it bothers me – this is the Mac, things are pretty here. So here’s what I did to make Sublime Text look, er, sublime (sorry).

Replace that god-awful default theme

Screengrab of my Sublime Text 2 setup

Aqua is a great looking theme which brings a much more Mac-like feel to Sublime. I took this theme and tweaked it slightly to show file icons in the sidebar. My variation on the theme is available on GitHub, or you can add the repo to Package Control: git@github.com:mrappleton/aqua-theme.git.

A nicer icon

Replacement Sublime Text 2 Icon

I quite liked the default icon but found it really difficult to pick out when I was switching between apps with cmd-tab. I found this icon by Dmitry Svetlichny on Dribbble via a tweet from Chris Rowe. It’s easier to spot and I think much prettier too.

Syntax Highlighting

I was having trouble getting used to Sublime and I figured that it was down to things being similar but not identical to my previous editor. I switched from Railscasts to a new theme as a kind of constant reminder that I was using something different and not to expect it to be exactly like TextMate.

I love the dark on light Espresso theme included in Aqua which I have been using with a single tweak to make the background a very light grey and the line highlight white.

Espresso syntax highlighting

This tweaked version is included in my fork of Aqua.

Overall I’m really pleased that Sublime is flexible enough to allow this kind of customisation, but it would be nice if the developers could spend some time on making it a more native Mac experience. In the meantime I’m really happy with my current setup.

JavaScript object creation performance

by Andy Appleton

I gave a talk at the London JS meetup last Monday evening where I compared the performance of two popular object creation patterns in JavaScript – the slides are available at Speaker Deck. Here’s a quick rundown of what I covered.

Module Pattern

I like using the module pattern for a number of reasons – not least that it provides a new function scope which can be used to hide variables and methods from the public interface of the object. The module pattern looks like this:

var Module = function(arg){
  var self = {
    getArg: function(){
      return arg;
    }
  };
  return self;
};
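To show that hiding in action, here’s a quick usage sketch (restating the pattern above so the snippet stands alone):

```javascript
var Module = function(arg) {
  var self = {
    getArg: function() {
      return arg;
    }
  };
  return self;
};

var m = Module('hello');
console.log(m.getArg()); // 'hello'
console.log(m.arg);      // undefined - arg lives in the closure,
                         // not on the returned object
```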

Constructor Functions

Constructor functions are built in to JavaScript. Calling a function with the new keyword causes the value of this inside it to be bound to a newly created object. This means that we can assign methods and properties to this and access them later on the returned object.

var Constructor = function(arg){
  this.arg = arg;
};
Constructor.prototype.getArg = function(){
  return this.arg;
};

Performance Testing

I wanted to figure out which pattern is faster to instantiate and to call methods on so I set up a number of tests on JSPerf:

  1. Object instantiation
  2. Method calls on instantiated objects
  3. Object instantiation and method calls
  4. Object instantiation with inheritance
  5. Method calls on instantiated objects with inheritance
  6. Object instantiation and method calls with inheritance
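The tests themselves live on JSPerf, but the rough shape of an instantiation benchmark looks something like this (a sketch, not the actual test code – absolute numbers vary wildly between engines and runs):

```javascript
// Module pattern: a fresh getArg closure is created per instance.
function makeModule(arg) {
  return {
    getArg: function() { return arg; }
  };
}

// Constructor function: getArg is shared on the prototype.
function Ctor(arg) {
  this.arg = arg;
}
Ctor.prototype.getArg = function() {
  return this.arg;
};

// Crude timing harness - JSPerf does this far more rigorously.
function time(label, fn) {
  var start = Date.now();
  for (var i = 0; i < 1000000; i++) fn(i);
  console.log(label + ': ' + (Date.now() - start) + 'ms');
}

time('module pattern', function(i) { makeModule(i); });
time('constructor', function(i) { new Ctor(i); });
```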

Vague Conclusions

The test results seem to indicate that Constructor functions are faster at instantiating an object than the module pattern. This makes sense to me as they are built into the language.

The module pattern seems faster at calling methods once the object is instantiated. Adding a level of inheritance into the mix doesn’t seem to make too much of a difference either way.

For me this is useful information to have in the back of my mind as I am coding. There was some interesting discussion in the Q&A after my talk about premature optimization and the importance of judging each case on its merits.

I think this kind of benchmarking is an interesting exercise and is useful for gaining a better understanding of the language, but ultimately you need to be making performance decisions based on actual data from your actual code.

Speaking at London JS July 2012

by Andy Appleton

I’m speaking at the LondonJS meetup next Monday, 16th July. My talk will look at the relative performance of different approaches for creating JavaScript objects.

Tickets are all gone but you can get on the waiting list if you’re interested.

Require.js 2.0

by Andy Appleton

At some point over the last few weeks Require.js 2.0 was released. It’s a really nice update which addresses a couple of pain points I have experienced whilst using it.

Non AMD Shim

My favourite new feature is the built-in shim for loading non-AMD-compatible libraries. Loading non-AMD JavaScript in 1.x meant either altering the code to include a define call, or using the order plugin to manually configure the load order of the files.

Neither option was particularly nice so in 2.0 we get the option to shim files in requirejs.config(), for example:

requirejs.config({
  shim: {
    'backbone': {
      // Define dependencies
      deps: ['underscore', 'jquery'],
      // Set a value for the file to export
      exports: 'Backbone'
    },
    'backbone.validation': {
      deps: ['backbone'],
      exports: 'Backbone.Validation'
    }
  }
});

This makes it much easier to update non-AMD libraries without worrying about breaking a wrapper. There are a few caveats to using this, but on the whole it’s a big improvement.

Fallback paths

2.0 has introduced errbacks – callbacks for error states. These allow us to account for modules which fail to load. A nice use for this is to specify a fallback path for the module – e.g. if we’re loading a file from a CDN.

Again this can be set in requirejs.config():

requirejs.config({
  paths: {
    jquery: [
      '//ajax.googleapis.com/ajax/libs/jquery/1.7.2/jquery.min',
      'lib/jquery'
    ]
  }
});

Require will attempt to load the module from each path specified in the array until it is successful.

Lazy evaluation

In 1.x any built file (a bundle of a number of modules produced by the optimizer) would all be evaluated on load. This has changed in 2.0 and now code is only evaluated as it is called. This is a nice performance win as it defers a lot of browser work until it is needed.

These are my favourite new features, but there’s plenty more so check it out.

Mac-like text navigation in Zsh

by Andy Appleton

I switched my shell from Bash to Zsh a few months ago and have been tinkering with my dotfiles ever since.

One cool trick I’ve found is to configure Mac-like text navigation. I’m a TextMate user and my fingers are hard-coded to hit cmd ← to skip to the beginning of a line, opt → to jump to the end of a word, etc.

I believe that OS X’s built-in Terminal does this automatically since 10.7, but I’m using iTerm2 so needed something else. It turns out it’s pretty straightforward to add to your .zshrc or .zshenv file:

# Skip forward/back a word with opt-arrow
bindkey '[C' forward-word
bindkey '[D' backward-word

# Skip to start/end of line with cmd-arrow
bindkey '[E' beginning-of-line
bindkey '[F' end-of-line

# Delete word with opt-backspace/opt-delete
bindkey '[G' backward-kill-word
bindkey '[H' kill-word

# Delete line with cmd-backspace
bindkey '[I' kill-whole-line

Some of the actual aliases (e.g. '[C') might be different for you. These can be customised in iTerm → Preferences → Keys.

If you feel like it, you can check out the rest of my dotfiles on GitHub.

Icomoon

by Andy Appleton

Icomoon is an in-browser icon font generator. Pick from a selection of free icon fonts or pay for access to a larger collection. You can even upload your own SVG files to have them added to the font.

I used this to generate the icon font for the sidebar links on this site and I love it.

http://keyamoon.com/icomoon/ →

Redesign

by Andy Appleton

I’ve updated the site design. I’ve gone for something simpler and have tried to put more focus on the blog in the hope that it will encourage me to post more regularly.

I won’t go into too much detail as I know it’s pretty dull hearing people bang on about their own work, but here’s a brief overview of what’s new.

Design

This is a cleaner, simpler evolution of the previous design. Proxima Nova Condensed from Typekit is used throughout and the site should scale up and down nicely thanks to a custom icon font and SVG logos.

Back end

I’ve dropped WordPress in favour of Jekyll. Jekyll is a static blog generator, which means that I get to write posts as Markdown-formatted plain text and publish the site as flat HTML.

This is awesome for page load performance because it means zero back end processing. It is awesome for me as it means I get to keep the entire site (posts and all) in Git.

To do

Since the site is entirely static I have no means of gathering comments. I have still not decided how I want to address this but most likely I’ll either go to a Disqus / Livefyre embed or just drop comments completely.

Update: I’ve archived the old design at 2009.floatleft.com.

Future of Web Apps London 2012

by Andy Appleton

I’m excited to be speaking at FOWA London in October this year! I’ll be delivering a talk on Backbone.js application structure so if you’re thinking of coming along you can check it out on Tuesday afternoon.

Here’s the summary…

The Modern JavaScript Application

Backbone is an awesome JavaScript library which provides just enough application structure to allow us to make our own design decisions. It isn't a full application framework and much of the plumbing required to use it as one is left up to us.

But what should that extra plumbing look like? And how can a client side application grow and still remain organised and sane?

We'll talk about lessons learned the hard way and ways to make life simpler and focus on the app rather than the framework. We'll also look at all the auxiliary benefits of taking the time to structure our apps in a clean modular fashion.