Ruby Gem install – mkmf.rb can’t find header files for ruby problem

Just a REALLY quick post. I’ve been seriously wanting to write a small ruby project for a long time now, and have finally been able to get a start on it. However, as I suspected, not everything is always smooth. I tried to do a gem install for some dependent libraries I’ll need, and got the following error:

System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/bin/ruby extconf.rb mkmf.rb can't find header files for ruby at /System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/lib/ruby/ruby.h

If you’re looking for a solution, then try the following:

  1. Load up Xcode (assuming you have it installed)
  2. Once open, click on Xcode -> Preferences -> Downloads
  3. In the list presented, click on the Command Line Tools, and install
  4. Restart
  5. Job Done!

This problem disappeared for me, and I hope it helps whoever else is experiencing this annoying problem!

Incidentally, I am using:

Mac OS X 10.8.2
Ruby Version: ruby 1.8.7 (2012-02-08 patchlevel 358) [universal-darwin12.0]

Watch this space for my first steps into the ruby world!

TDD – Have a Common Understanding in the Team

Recently I’ve been paying more attention to my style and approach to TDD, and in doing so, I’ve begun to notice a few interesting things which, to me, seemed natural and almost “part of the process”, but clearly that’s not the case with everyone. When I start on a dev task I typically try to ask as many of the right questions (see my previous post about avoiding autopilot) as I can before I set off diving into code. I like to set the expectations of what it is that needs to be achieved, which is perfect in the world of TDD, because ultimately your tests should be verifying that your expectations match what actually happens.

I’ve come to learn that this isn’t necessarily the same “naturally assumed” approach which other developers might take. Along with that, there is often a misunderstanding within the team about what constitutes a focussed test, as well as what defines a category of tests. i.e. What qualifies as a unit test vs an integration test? And when should each be used?

An experiment – collecting opinions

I thought I’d embark on a quick experiment and ask colleagues around me what their view on TDD is. Not surprisingly, there were a lot of different views, and almost everyone I spoke to had a somewhat different idea of what TDD should be. Some advocated integration-heavy tests to ensure business scenarios work, while others advocated unit tests for quicker feedback, etc. But what I found most interesting is that the definitions of unit, integration, functional and end-to-end tests varied depending on who I spoke to! It became clear that it was commonplace to hear different vocabulary being used to describe the same end goal. For example, I’d heard the terms “system testing”, “integration testing” and “acceptance testing” as well as “UAT” all used to mean the same thing!

Clearing the air

Obviously it’s fruitless to be embarking on a TDD journey if everyone in the team has a different idea of what it’s supposed to be. Most of the time when you hear the acronym “TDD” it’s automatically assumed to be unit tests that you’re writing, and for the large part, that might hold true. But I’m going to include other categories of testing that I feel should form part of the TDD paradigm. The definitions I supply below are not to be taken as solid, regimented definitions, but more as a way for teams to have a common understanding of which area of TDD they’re targeting.

The Definitions

So let’s clear the air a bit and define what is what. I’m obviously not the first person to write about the difference between unit and integration tests, but here’s what I find works:

Unit Test

A test that focuses on a single segment of behaviour, e.g. a test that adding money to an account increases the overall balance. These tests should provide feedback as quickly as possible, and must run on every code change. Most importantly, a unit test is not something that touches 3rd party libraries/frameworks or resources which you have no control over. This includes, for example, interactions with a database, the file system, and network comms. Some might argue that interactions between classes or modules that you have written yourself should also be dumped in the integration category, but I disagree.
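The balance example above might look like this as a plain-Java sketch. The Account class and its API are hypothetical, and I’ve used a bare assertion rather than a test framework just to keep the snippet self-contained:

```java
// Hypothetical class under test: the smallest thing that can carry the behaviour.
class Account {
    private long balance;

    void addMoney(long amount) {
        balance += amount;
    }

    long getBalance() {
        return balance;
    }
}

class AccountUnitTest {
    public static void main(String[] args) {
        Account account = new Account();
        long before = account.getBalance();
        account.addMoney(100);

        // The expectation under test: adding money increases the overall balance.
        if (account.getBalance() <= before) {
            throw new AssertionError("Adding money should increase the balance");
        }
        System.out.println("Balance after deposit: " + account.getBalance());
    }
}
```

Note how nothing here touches a database, a file, or the network; that’s what keeps it in the unit category and keeps it fast.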

Integration Tests

Focussed on ensuring that the aspects of your code which interact with 3rd party libraries/frameworks and external resources behave in the way you expect them to, e.g. writing an export file to disk, or saving an entity to the database successfully. These tests probably wouldn’t be run quite as often as your unit tests, as they could become quite heavy over time as the application grows. Typically, in most organisations I’ve worked in, these tend to run on every continuous integration build, and not necessarily on every code change – which includes changes made while developing.
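To make the distinction concrete, here’s a minimal sketch of what I’d put in the integration category: it touches the file system, a resource outside our own code, so it doesn’t belong with the unit tests. The class name and the CSV content are my own invention for the example:

```java
import java.nio.file.Files;
import java.nio.file.Path;

class ExportFileIntegrationTest {

    // Writes a small export file to a temp location, reads it back,
    // and cleans up afterwards.
    static String writeAndReadBack(String csv) throws Exception {
        Path export = Files.createTempFile("export", ".csv");
        try {
            Files.write(export, csv.getBytes("UTF-8"));
            return new String(Files.readAllBytes(export), "UTF-8");
        } finally {
            Files.deleteIfExists(export);
        }
    }

    public static void main(String[] args) throws Exception {
        String roundTripped = writeAndReadBack("id,amount\n1,100\n");
        if (!roundTripped.equals("id,amount\n1,100\n")) {
            throw new AssertionError("Export round-trip failed");
        }
        System.out.println("Export file written and verified");
    }
}
```

Because it hits the disk, it’s slower and flakier than an in-memory unit test, which is exactly why I’d run it on the CI build rather than on every code change.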

End to End/Acceptance Tests

Ensures that a business feature works as expected. This could include real-world business scenarios exercised either through an API that has been built, or even through the UI – although that could be a whole other topic for another time. The point is, these end-to-end tests won’t run as frequently, and they will only test the core business scenarios. They involve complete vertical calls, tend to be very CPU intensive, and are known to take large amounts of time to run, especially as the application grows. Typically, you would have these run once a day, perhaps on a scheduled nightly build. If you happen to have introduced BDD into your development environment, these are the tests I would expect to see running here.

So in summary, the real point here is that I was a little surprised that, still to this day, many developers have a different take on what TDD is and how to approach it. That might be fine, but in order for a team to produce good quality solutions and efficient code bases, at the very least everyone needs to have a common understanding of what sort of tests they’re writing. Without it, you’re on a road that will end in pain. Big pain.

Avoid “autopilot” behaviour when developing

I love coffee. And there’s a pretty good coffee place close to where I work that I regularly frequent out of the addiction I have now acquired for caffeine. Not to mention that I may possibly need it in order to turn me into a less “abrasive” individual in the morning. 🙂 For the past week or so, the conversation has gone a little like this:

Barista: Hi there, what can I get you?

Me: I’d like a medium white americano, to have in please.

Barista: Americano?

Me: Yes

Barista: What size?

Me: Medium please.

Barista: To take away?

Me: No, to have in.

Barista: Would you like milk with that?

Me: Yes.

Now, this happens almost every time I go, and it got me thinking about some particular traits I’ve picked up on while working with some inexperienced software developers. When given a requirement, they almost instantly go into autopilot and just begin coding, not always asking the much needed questions that they should (or sometimes not asking any at all!). On the other extreme, as in the case of the barista, sometimes they ask all the questions which were already answered in the first statement! As you can imagine, both ends of the scale result in a lot of wasted time. Clearly this barista was (and still is!) on autopilot, just asking the usual questions, and not really listening.

Inexperienced software developers have a tendency to just want to get stuck in and code away. I’ve been there, done that, and also subsequently been burnt by it. Resist this urge. Don’t assume you know the domain well enough to begin coding. Look at the problem you’re trying to solve first. Listen to the requirement and the context it’s in, and try to ask the right questions, and especially try not to ask the same questions which have already been answered, except where you are confirming your understanding.

I honestly think that if every developer did this, there would be fewer bugs, fewer misunderstandings, and fewer cases of under-delivering on expectations. If you need a little help fighting the urge to code, then I’d suggest that you use the 5 Whys approach. If anything, this approach should help you contextualise the problem, and promote a discussion that leads you to ask the right questions before rushing in to code.

Stubbing void methods with Mockito

I’ve been meaning to post something about this topic for a while, but it seems to have kept slipping my mind. This post is targeted at those of you who use Mockito for your testing (Mockito is brilliant btw, and I applaud the folks over at Mockito for making this library!)

One thing that I got in the habit of was typing when(mockClass.methodCall(eq(arg1))).thenCallRealMethod() etc. You’ll soon notice that if you’re trying to mock a void method, intellisense in any of your IDEs will probably fail to allow you to select the methodCall method. There is an answer, and it’s way better than what was suggested in earlier versions. There is now a suite of doAnswer, doCallRealMethod, doNothing, doReturn(object), doThrow(exception), and the list goes on.

If you’re struggling to mock a void method, this is how you should go about it. Assuming you have a class that you would like to mock:

public class SampleClass {
   private String defaultValue = "default";

   public String getDefaultValue() {
      return defaultValue;
   }

   public void setDefaultValue(String defaultValue) {
      this.defaultValue = defaultValue;
   }
}

Here is a test class, mocking the behaviour of SampleClass, and setting up the test so that calls to the void method setDefaultValue are actually made, and subsequently verified with Mockito.

public class SampleClassTest {

  private SampleClass sampleClass = mock(SampleClass.class);
  private String newDefaultValue = "Changed";

  public void testOldWayOfVoidMethodMock(){

//   This is the old approach to attempting to mock void methods. I suggest
//   you avoid this in favour of the new method which is much
//   cleaner and more readable.

     stubVoid(this.sampleClass).toAnswer(new Answer() {
       public Object answer(InvocationOnMock invocation) throws Throwable {
         return null;
       }
     }).on().setDefaultValue(newDefaultValue);

     sampleClass.setDefaultValue(newDefaultValue);
     verify(sampleClass).setDefaultValue(newDefaultValue);
  }

  public void testNewWayOfVoidMethodMock(){
//    Previously you would have been tempted to write something like
//    this to test void methods...
//    when(sampleClass.setDefaultValue(eq(newDefaultValue))).thenCallRealMethod();

//    However, this is how you should treat mocking void methods with
//    Mockito. Works with or without methods that require arguments
//    of course.
     doCallRealMethod().when(sampleClass).setDefaultValue(newDefaultValue);

     sampleClass.setDefaultValue(newDefaultValue);
     verify(sampleClass).setDefaultValue(newDefaultValue);
  }
}

I’ve included two test methods to demonstrate the old suggested way, and the new, much more readable way. If you need more help then I suggest heading over to the Mockito documentation, which is pretty useful.

Appfuse – The defined artifact is not an archetype

If ever you run into an issue trying to create a project from an appfuse archetype template inside your favourite IDE and receive the following error message:

Failed to execute goal org.apache.maven.plugins:maven-archetype-plugin:2.2:generate (default-cli) on project standalone-pom: The defined artifact is not an archetype -> [Help 1]

Then rather go here: and run the command line version. It’ll save you the headache!

Hash based Message Authentication Code hashing in Spring MVC

Hash-based Message Authentication Code (HMAC) is used in scenarios where you need to verify the integrity and authenticity of a message. Recently I was implementing some security aspects for our REST service, and I noticed that Spring Security currently supports SHA hashing for passwords, but no HMAC SHA hashing.

Now firstly, this is really for informational purposes, and I’m in no position to advise which way is better; instead, I’m providing what I think I’d rather have implemented, and it’s purely a personal preference – for me anyway.

So, now back to the point. Firstly, below is a sample of a security configuration (in XML) using one of the many Spring Security provided implementations of the PasswordEncoder interface – namely the ShaPasswordEncoder.

      <authentication-provider>
         <password-encoder hash="sha"/>
         <user-service>
            <user name="jimi" password="d7e6351eaa13189a5a3641bab846c8e8c69ba39f" authorities="ROLE_USER, ROLE_ADMIN" />
            <user name="bob" password="4e7421b1b8765d8f9406d87e7cc6aa784c4ab97f" authorities="ROLE_USER" />
         </user-service>
      </authentication-provider>

Now in this simple contrived sample config, we’re telling Spring Security to use their implementation, the ShaPasswordEncoder. Upon inspection of this implementation, you’re also able to specify the field responsible for holding the salt value, if you choose to provide one. In the REST service I have, I definitely opted to provide a salt with which to further secure the hash.

This ShaPasswordEncoder implementation provided by Spring performs a hash on the raw concatenated string in the format of “password{salt}”. This surprised me somewhat, because I had actually expected Spring to provide an implementation of HMacSHA when providing a salt, not just an implementation of SHA! And believe me, there’s a difference! So the purist in me came out, and I couldn’t let this rest (see what I did there? rest… REST?.. ok I know, perhaps a little lame, but I digress.)

The main difference between the two implementations is that the way Spring have done it, they have just effectively hashed a string. With HMac, the salt actively plays a role in computing the hash of the raw string, rather than simply being appended to it. Some might argue that they arrive at the same end result, which is a hashed string based on the salt. I’d tend to agree, but the purist in me wanted to do it properly, so I wrote my own implementation of what the HMacSha should have been. Below is a simple snippet of what the code looks like when just using a normal SHA hash, and when using a HMacSHA. In these samples below I’ve only shown the code which actually does the hashing so you can see what the difference is. Also bear in mind that my custom rolled hashing class implements the PasswordEncoder interface so that I can include it in the security config XML.

SHA Implementation

public String encodePassword(String rawPass, Object salt) {
    String saltedPass = mergePasswordAndSalt(rawPass, salt, false);

    MessageDigest messageDigest = getMessageDigest();

    byte[] digest = messageDigest.digest(Utf8.encode(saltedPass));

    // "stretch" the encoded value if configured to do so
    for (int i = 1; i < iterations; i++) {
        digest = messageDigest.digest(digest);
    }

    if (getEncodeHashAsBase64()) {
        return Utf8.decode(Base64.encode(digest));
    } else {
        return new String(Hex.encode(digest));
    }
}

protected final MessageDigest getMessageDigest() throws IllegalArgumentException {
    try {
        return MessageDigest.getInstance(algorithm);
    } catch (NoSuchAlgorithmException e) {
        throw new IllegalArgumentException("No such algorithm [" + algorithm + "]");
    }
}

protected String mergePasswordAndSalt(String password, Object salt, boolean strict) {
    if (password == null) {
        password = "";
    }

    if (strict && (salt != null)) {
        if ((salt.toString().lastIndexOf("{") != -1) || (salt.toString().lastIndexOf("}") != -1)) {
            throw new IllegalArgumentException("Cannot use { or } in salt.toString()");
        }
    }

    if ((salt == null) || "".equals(salt)) {
        return password;
    } else {
        return password + "{" + salt.toString() + "}";
    }
}
HMac Implementation

protected final Mac getMac() throws IllegalArgumentException {
    try {
        return Mac.getInstance(algorithm);
    } catch (NoSuchAlgorithmException e) {
        throw new IllegalArgumentException("No such algorithm [" + algorithm + "]");
    }
}

public String encodePassword(String rawDataToBeEncrypted, Object salt) {
    byte[] hmacData = null;
    if (rawDataToBeEncrypted != null) {
        try {
            SecretKeySpec secretKey = new SecretKeySpec(rawDataToBeEncrypted.getBytes(ENCODING_FOR_ENCRYPTION), this.algorithm);
            Mac mac = getMac();
            mac.init(secretKey);
            hmacData = mac.doFinal(salt.toString().getBytes(ENCODING_FOR_ENCRYPTION));

            if (isEncodeHashAsBas64()) {
                return new String(Base64.encode(hmacData), ENCODING_FOR_ENCRYPTION);
            } else {
                return new String(hmacData, ENCODING_FOR_ENCRYPTION);
            }
        } catch (InvalidKeyException ike) {
            throw new RuntimeException("Invalid Key while encrypting.", ike);
        } catch (UnsupportedEncodingException e) {
            throw new RuntimeException("Unsupported Encoding while encrypting.", e);
        }
    }
    return "";
}

I am using this version of the hashing algorithm for my sample REST service which I have up on Github. You can check out the java file itself here, but I’ll give a more detailed run down of the sample project in my next blog post, which is about supporting stateless authentication in a Spring REST application, and how I went about rolling my own security filter/mechanism to support this. Watch this space!
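To show that the two approaches really do produce different digests, here’s a small, self-contained comparison I put together using only the JDK. The class and method names are mine, not Spring’s, and the password/salt values are made up for the demo:

```java
import java.security.MessageDigest;
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;

class HmacVsSha {

    // Plain SHA-1 over "password{salt}", the same shape of input that
    // Spring's ShaPasswordEncoder hashes.
    static String shaOfConcatenation(String password, String salt) {
        try {
            MessageDigest md = MessageDigest.getInstance("SHA-1");
            return toHex(md.digest((password + "{" + salt + "}").getBytes("UTF-8")));
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    // HMAC-SHA1 with the password as the key and the salt as the message,
    // mirroring the custom encoder above.
    static String hmacSha(String password, String salt) {
        try {
            Mac mac = Mac.getInstance("HmacSHA1");
            mac.init(new SecretKeySpec(password.getBytes("UTF-8"), "HmacSHA1"));
            return toHex(mac.doFinal(salt.getBytes("UTF-8")));
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    static String toHex(byte[] bytes) {
        StringBuilder sb = new StringBuilder();
        for (byte b : bytes) {
            sb.append(String.format("%02x", b));
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        System.out.println("SHA-1 of password{salt}:  " + shaOfConcatenation("secret", "salt"));
        System.out.println("HMAC-SHA1 keyed by password: " + hmacSha("secret", "salt"));
    }
}
```

Both come out as 40 hex characters, but they are entirely different values – which is exactly the point: a keyed hash is not the same as hashing a concatenated string.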

JQuery – Wait for multiple AJAX calls to return before continuing

I have to say, I love jQuery. Until two years ago, I was a complete and utter JavaScript noob, and quite frankly I feared it. But when I began getting to grips with this gem of a JavaScript framework, my eyes opened up to an entirely new world!

Ok, enough gushing and on to my discovery. I have a website that currently makes multiple AJAX calls to retrieve a bunch of lookup lists etc. Before I can set up a grid I have on the page, which makes use of these lookup lists, I need to wait for all these calls to return before I can initialize the grid with the data. Enter jQuery’s when/then concept. This is only available from jQuery 1.5 onwards.

function getLookupListOne(){
    return $.get('lookup-list-one-url.json', function(data){
        for(var i = 0; i < data.length; i++){
            lookupListArrayOne[i] = "Some global array data you might want to initialize...";
        }
    });
}

function getLookupListTwo(){
    // Content left out for brevity
}

function getLookupListThree(){
    // Content left out for brevity
}

$(document).ready(function() {
    $.when(getLookupListOne(), getLookupListTwo(), getLookupListThree())
        .then(function() {
            alert('All AJAX Methods Have Completed!');
        });
});

The remote end hung up unexpectedly – Git + IntelliJ + Mac OS X

So I thought I’d try out IntelliJ’s neat little feature to import my project into a Github repo. Now I always used to do the git repo setup manually at the command line, and then commit and push my changes from the terminal too! But to speed things up, I thought I’d give this a bash. Only to be prompted with the following error in the Version Control pane in IntelliJ: There was a problem while connecting to
at com.trilead.ssh2.Connection.connect(
at com.trilead.ssh2.Connection.connect(
at org.jetbrains.git4idea.ssh.SSHMain.start(
at org.jetbrains.git4idea.ssh.SSHMain.main(
Caused by: Connection refused
at Method)
at com.trilead.ssh2.transport.TransportManager.establishConnection(
at com.trilead.ssh2.transport.TransportManager.initialize(
at com.trilead.ssh2.Connection.connect(
… 3 more
fatal: The remote end hung up unexpectedly

With a bit of searching and digging around on the net, I came across fatal: The remote end hung up unexpectedly and the obvious setup instructions on github. But the real gem that helped me out was Github (SSH) via public WIFI, port 22 blocked

I wouldn’t have originally thought that it would be because of the network I was connected to, but in desperation I resorted to looking around about ssh’ing on public networks, and eventually found this. For those who don’t feel like reading the StackOverflow question/answer, you basically just need to do the following:

Create the config

nano ~/.ssh/config

Add the config

Host github.com
  Hostname ssh.github.com
  User YourEmail
  Port 443

And Presto! When you push your changes from IntelliJ to Github, it’ll ask you if you trust this domain, and once you accept, all will be good with the world!

JSR303 + Spring MVC + Selective Validations

If you’ve ever been using JSR 303 bean validation in Spring MVC, via the @Valid annotation inside your controllers, you’ve no doubt been somewhat frustrated by the lack of support for selective bean validation. What I mean by this is that in some cases (in reality, most cases!) when you validate a bean, you want control over what gets validated and when.

In the past, one of the ways to overcome @Valid running all of your validations was either to use different model objects or to whip up your own custom version of the @Valid annotation, which isn’t the cleanest way to go about it if you ask me. It’s just too open to errors.

Well thankfully, Spring 3.1 introduces the @Validated annotation, which you can use in place of @Valid. We now have the opportunity to specify “groups” to validate. So typically, when you’re applying bean validation to your model object, it would usually look something like this:

public class UserDTO {

    @NotBlank(message = "Username must be supplied.", groups = {Default.class, MinimalUserValidation.class})
    @Length(min = 6, message = "Username must be at least 6 characters long.")
    private String username;

    @NotBlank(message = "Password must be supplied.", groups = {Default.class, MinimalUserValidation.class})
    private String password;
    private String confirmPassword;

    //Rest of class definition omitted for brevity
}

Notice how on the two @NotBlank annotations, we have now specified the groups value and have provided it with marker interfaces, declaring that these are the validations we’d like to have run when any one of these marker interfaces is mentioned within our @Validated annotation.
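For clarity, a validation group is nothing special in itself – it’s just an empty marker interface. A hypothetical definition of the one used above would be all of this:

```java
// A JSR 303 validation group is just a plain marker interface with no
// methods; it exists only to tag constraints for selective validation.
interface MinimalUserValidation {
}
```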

So, if we only wanted to run the MinimalUserValidation marked validations, then this is what your controller method would look like:

@RequestMapping(value="/add-user-partial.html", method = RequestMethod.POST)
public String createNewUserWithPartialValidation(@Validated(value = MinimalUserValidation.class) UserDTO userDTO, BindingResult bindingResult, Model model){
    // Method body omitted for brevity
}

And that’s all there is to it. I have a sample project on Github here if you want to check it out.

As a side note – for the curious amongst us, after digging around a bit on the constraint annotation interfaces (check out javax.validation.constraints.AssertTrue for an example), you’ll see each includes a payload attribute. After a bit of google goodness, I came across this blog post which provided the answers I was looking for. 🙂

Android Development Part 2 – RoboGuice & Robolectric

Wow, this really is a belated post! I apologise for taking nearly 6 months to write this, but things have kept me rather distracted over the past couple of months, which hopefully I’ll share in the next few blog posts to come (I promise I won’t take another 6 months to write those either! 🙂 )

So, as promised in my previous post, I’d like to talk about some frameworks to get you up and running quickly with building android applications. In fact there are two which I found invaluable: RoboGuice and Robolectric.


If you’re familiar with dependency injection and your normal run-of-the-mill IoC frameworks, then RoboGuice will make you feel right at home! I’d already had quite a bit of experience with Spring, so using RoboGuice was relatively straightforward. They have a very good “Getting Started” guide on their site, which I encourage you to read if you need a little nudge in the right direction. What sealed it for me was that RoboGuice could take care of the boring stuff for me, like injecting my Views, Resources and Services, which I would normally have had to painstakingly code the lookups for myself. You’ll find that without RoboGuice, you’ll be coding a lot of lines that look like this (unashamedly plagiarized from RoboGuice’s getting started page!):

class AndroidWay extends Activity {
    TextView name;
    ImageView thumbnail;
    LocationManager loc;
    Drawable icon;
    String myName;

    public void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.main);
        name      = (TextView) findViewById(R.id.name);
        thumbnail = (ImageView) findViewById(R.id.thumbnail);
        loc       = (LocationManager) getSystemService(Activity.LOCATION_SERVICE);
        icon      = getResources().getDrawable(R.drawable.icon);
        myName    = getString(R.string.app_name);
        name.setText( "Hello, " + myName );
    }
}

So as you can see, there is a fair amount of plumbing going on here, and this is just a simple view. Imagine a form input view which has a myriad of input and label fields, from drop downs, to TextViews, to Lists etc. Man, I’m getting a headache just thinking how messy it could get! 🙂 So, let’s face it, who really wants to have to type that line to find the view, remember to cast it correctly, or do that laborious resource lookup – quite frankly, it’s a waste of time!

So by simply using the @InjectView, @InjectResource or just plain @Inject, you can avoid all this pain quite easily, and here is a sample of the above example using RoboGuice (Unashamedly plagiarized from RoboGuice’s getting started page!):

class RoboWay extends RoboActivity {
    @InjectView(R.id.name)             TextView name;
    @InjectView(R.id.thumbnail)        ImageView thumbnail;
    @InjectResource(R.drawable.icon)   Drawable icon;
    @InjectResource(R.string.app_name) String myName;
    @Inject                            LocationManager loc;

    public void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.main);
        name.setText( "Hello, " + myName );
    }
}


Now with RoboGuice and all its injectable goodness, you’d be forgiven for wondering if this makes your testing easier… well, let me put your mind at rest and tell you that YES, it most certainly does! And what better way to test your functionality than with an android testing framework known as Robolectric. I’ve seen, in the past, quite a few approaches to testing android apps, and some of them involved actually booting up an emulator, deploying the package, and running the app with the tests. This is a very time-consuming operation, and to be honest, most of the time we just want to test the functionality that we have written, which can’t be done without the inclusion of the activities and views along with their respective interactions. How does Robolectric achieve this? Well, to quote from their website:

Robolectric makes this possible by intercepting the loading of the Android classes and rewriting the method bodies. Robolectric re-defines Android methods so they return null (or 0, false, etc.), or if provided Robolectric will forward method calls to shadow Android objects giving the Android SDK behavior. Robolectric provides a large number of shadow objects covering much of what a typical application would need to test-drive the business logic and functionality of your application. Coverage of the SDK is improving every day.

So, what this essentially boils down to is that Robolectric has its own test runner (which you use with the @RunWith annotation), which will mimic your android device by returning what is effectively a mock object for things like location services, or views, or resources etc. You can then even override these mock objects – also referred to as “shadow objects” – which you can code yourself to return a predicted outcome. This allows you to test your android app from within a JVM, even inside your own IDE! No need for bootstrapping an emulator, no need for compiling and packaging it into a dex and deploying. Just write the test, and run. The beauty of this, at least for me, is that I can now run these tests via maven, and include them as part of my CI build. Job done. You might not be able to catch form factor bugs in this manner, but I believe that trying to test for different form factors on different devices provided by different vendors actually requires a different approach which I haven’t yet had to tackle – but when I do, my solution will be up here. 🙂

Well that’s all for now. I hope this helps any of you out there thinking of writing an android app and needing help getting started. If you can get your project structure right, and use these two frameworks I’ve mentioned, you’ll be well on your way to a painless and happily filled development life! 🙂