
ATDD From the Trenches

Getting started with Acceptance-Test Driven Development

Have you ever been in this situation? You're about to build a new feature, and it would be nice to write an automated acceptance test, but you don't have any so far and you don't know how to get started.

Then this article is for you – a concrete example of how to get started with acceptance-test driven development on an existing code base. It is part of the solution to technical debt.

This is a real-life example, warts and all, not a polished schoolbook example. So get your trench boots on. I will stick with just Java and JUnit, no fancy third-party testing frameworks (which tend to be overused).

Disclaimer: I don’t claim that this is The Correct Way; there are many other “flavors” of ATDD out there. Also, there’s not much new or innovative stuff in this article, just well-established practices and hard-earned experience.

What I wanted to do

A few days ago I sat down to build a password-protect feature for webwhiteboard.com (my pet project). People have long been asking for a way to password-protect their online whiteboards, so it was time to get it done.

It sounds like a simple feature, but there are lots of design decisions to be made. So far, webwhiteboard.com has been based on anonymous usage, without any kind of accounts, sign-in, or password handling. Who should be able to protect a whiteboard? Who should be able to access it? What if I forget my password? How do we keep things simple yet secure enough?

The webwhiteboard code base has decent unit test and integration test coverage. But it had no acceptance tests; that is, tests that go through an end-to-end flow from the user perspective.

Design considerations

The main design goal of web whiteboard is simplicity: to minimize the need for logins and accounts and other annoyances. So I set two design constraints for the password feature:

  • Setting a password on a whiteboard will require user authentication, but accessing a password-protected board will not. That is, a user who opens a protected whiteboard needs to enter the whiteboard password, but doesn’t need to “log in”.
  • Login will be done using a third-party OpenId/Oauth service provider, initially Google. That way, the user doesn’t have to create yet another user account.

Implementation approach

Lots of uncertainty here. I was unsure of how I wanted it to work, and even more unsure about how I wanted to implement it. So here was my approach (basically ATDD):

  • Step 1: Document the intended flow at a high level
  • Step 2: Turn it into an executable acceptance test
  • Step 3: Make the acceptance test run, but fail
  • Step 4: Make the acceptance test succeed
  • Step 5: Clean up the code

This is iterative, so at each step I may decide to go back and tweak a previous step (which I did very often).

Step 1: Document the intended flow

Suppose the feature is Done. An angel came and implemented it while I was asleep. Sounds too good to be true! How would I verify this? What is the first thing I would test manually? This:

  1. I create a new whiteboard
  2. I set a password on it.
  3. Joe tries to open my whiteboard, is asked to enter a password.
  4. Joe enters the wrong password and is denied access
  5. Joe tries again, enters the right password, and gets access. (“Joe” is of course just me, using another web browser…).

Even as I wrote this little test script, I realized there were lots of alternative flows to take into account. But this was the main scenario. If I can get just this to work, I’ve come far.

Step 2: Turn it into an executable acceptance test

Here’s the tricky part. I have no other end-to-end acceptance tests, so how do I even start? This feature will interact with 3rd party authentication systems (my preliminary decision was to use Janrain) as well as databases, and there’s lots of tricky web stuff involved with popup dialogs and tokens and redirects and such. Ugh.

Time to take a step back. Before solving the problem “how do I write this acceptance test” I need to solve the more basic problem “how do I write acceptance tests at all, in this code base?”

To drive this question, I tried to identify the “simplest possible feature” that I could test, something that already works today.

Step 2.1: Write the Simplest Possible executable acceptance test

Here’s what I came up with:

  1. Try to open a non-existing whiteboard
  2. Check that I didn’t get a whiteboard

How would I implement this test? Which frameworks? Which tools? Should it involve the GUI, or bypass it? Should it involve client code or talk directly with the server?

Lots of questions. The trick is: don’t answer them! Just pretend it’s all been beautifully solved somehow, and write the test as pseudocode. Here it is.


public class AcceptanceTest {
     @Test
      public void openWhiteboardThatDoesntExist() {
          //1. Try to open a non-existing whiteboard
         //2. Check that I didn't get a whiteboard
      }
}

I run it, and it succeeds! Hurray! Er, no wait, that’s wrong! The first step in the TDD cycle (“Red-Green-Refactor”) is Red. So I need to make it fail, to prove that the feature needs to be built.

I’d better get on with writing some real test code. Nevertheless, the pseudocode got me moving in the right direction.

Step 2.2: Make the Simplest Possible acceptance test Red

To make this test real, I make up a class called AcceptanceTestClient, and I pretend it has magically solved all the questions and gives me a beautiful, high-level API for running my acceptance test. Using it is as simple as this:

     client.openWhiteboard("xyz");
     assertFalse(client.hasWhiteboard());

As I write that code, I’m essentially inventing an API that suits the exact needs of this test case. It should be about as many lines of code as the pseudocode.

Next, I use shortcut keys in Eclipse to have it auto-generate an empty version of AcceptanceTestClient and the methods I need:


public class AcceptanceTestClient {
        public void openWhiteboard(String string) {
               // TODO Auto-generated method stub
        }

        public boolean hasWhiteboard() {
               // TODO Auto-generated method stub
               return false;
        }
}

Here is the complete test class now:


public class AcceptanceTest {
  AcceptanceTestClient client;

  @Test
  public void openWhiteboardThatDoesntExist() {
    //1. Try to open a non-existing whiteboard
    client.openWhiteboard("xyz");

    //2. Check that I didn't get a whiteboard
    assertFalse(client.hasWhiteboard());
  }
}

This test runs, but fails (because client is null). Good!

What have I solved? Not much. But it’s a start. I have the beginnings of an acceptance test helper class, the AcceptanceTestClient.

Step 2.3: Make the Simplest Possible acceptance test Green

The next step is to make the acceptance test green.

Note that I have a much simpler problem to solve now. I don’t have to worry about authentication and multiple users and stuff like that. I can add tests for that later.

As for the AcceptanceTestClient, the implementation was pretty standard – mock out the database (I had code for that already) and run an in-memory version of the whole webwhiteboard system.

Here’s the production setup:

(Diagram: the production setup)

Techy details: Web Whiteboard uses GWT (Google Web Toolkit). Everything is written in Java, but GWT automatically translates the client-side code to JavaScript, and inserts the RPC magic (Remote Procedure Calls) to encapsulate all the dirty details of asynchronous client-server communication.
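
To give a feel for what that means in code, a GWT service interface and its client-side async twin look roughly like this. This is a generic illustration of the GWT convention, not webwhiteboard’s actual code:

import com.google.gwt.user.client.rpc.AsyncCallback;
import com.google.gwt.user.client.rpc.RemoteService;
import com.google.gwt.user.client.rpc.RemoteServiceRelativePath;

//The synchronous interface, implemented by the server
@RemoteServiceRelativePath("whiteboard")
public interface WhiteboardService extends RemoteService {
  WhiteboardEnvelope getWhiteboard(String whiteboardId, boolean createIfMissing);
}

//GWT requires a matching async interface for client-side calls
public interface WhiteboardServiceAsync {
  void getWhiteboard(String whiteboardId, boolean createIfMissing,
      AsyncCallback<WhiteboardEnvelope> callback);
}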

In the acceptance test setup, I “short circuit” the system and cut out all frameworks, 3rd party services, and network communication.

(Diagram: the acceptance test setup)

So I create an AcceptanceTestClient that talks to the web whiteboard service in the same way that the real client does. The difference is behind the curtains.

  • The real client talks to the web whiteboard service interface, and this is running in a GWT environment which automatically turns requests into RPC calls and relays to the server.
  • The acceptance test client also talks to the web whiteboard service interface, but this one is directly connected to a local service implementation, no need for RPC and, hence, no need for GWT while running the tests.

Also, the acceptance test configuration replaces the Mongo database (a cloud-based NoSQL database) with a fake in-memory database.

The reason for all this faking is to simplify the environment, make the tests run faster, and make sure the tests are testing the business logic isolated from all the framework and network stuff.
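
The fake storage itself can be tiny. Here is a minimal sketch of what it might look like; the real WhiteboardStorage interface isn’t shown in this article, so the get/put methods below are assumed names:

import java.util.HashMap;
import java.util.Map;

//Minimal sketch of an in-memory fake, assuming a get/put-style
//WhiteboardStorage interface (the real interface isn't shown here)
public class FakeWhiteboardStorage implements WhiteboardStorage {
  private final Map<String, WhiteboardEnvelope> boards =
      new HashMap<String, WhiteboardEnvelope>();

  public WhiteboardEnvelope get(String whiteboardId) {
    return boards.get(whiteboardId); //null if the whiteboard doesn't exist
  }

  public void put(String whiteboardId, WhiteboardEnvelope envelope) {
    boards.put(whiteboardId, envelope);
  }
}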

This may sound like a complicated setup, but in fact it’s pretty much just an init method with 3 lines.


public class AcceptanceTest {
   AcceptanceTestClient client;

   @Before
   public void initClient() {
     WhiteboardStorage fakeStorage = new FakeWhiteboardStorage();
     WhiteboardService service = new WhiteboardServiceImpl(fakeStorage);
     client = new AcceptanceTestClient(service);
   }

   @Test
    public void openWhiteboardThatDoesntExist() {
     client.openWhiteboard("xyz");
     assertFalse(client.hasWhiteboard());
   }
}

WhiteboardServiceImpl is the existing server-side implementation of the web whiteboard system.

Note that the AcceptanceTestClient now accepts a WhiteboardService instance in its constructor (a pattern known as “dependency injection”). This gives us a bonus side-effect: the client doesn’t care about the configuration. The same unmodified AcceptanceTestClient class can be used to test against the live environment by just sending in a live-configured instance of WhiteboardService.
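
For example, a live configuration could look something like this (a hypothetical sketch; MongoWhiteboardStorage is an assumed name):

//Hypothetical sketch; MongoWhiteboardStorage is an assumed name
WhiteboardStorage liveStorage = new MongoWhiteboardStorage();
WhiteboardService liveService = new WhiteboardServiceImpl(liveStorage);
AcceptanceTestClient client = new AcceptanceTestClient(liveService);

Here is the AcceptanceTestClient implementation: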


public class AcceptanceTestClient {
  private final WhiteboardService service;
  private WhiteboardEnvelope envelope;

  public AcceptanceTestClient(WhiteboardService service) {
    this.service = service;
  }

  public void openWhiteboard(String whiteboardId) {
    boolean createIfMissing = false;
    this.envelope = service.getWhiteboard(whiteboardId, createIfMissing);
  }

  public boolean hasWhiteboard() {
    return envelope != null;
  }
}

So in summary, the AcceptanceTestClient mimics what the real web whiteboard client does, while providing a high-level API towards the acceptance tests.

You might be wondering “why do we need an AcceptanceTestClient when we already have a WhiteboardService we could talk to directly?”. There are two reasons:

  1. The WhiteboardService API is lower-level. AcceptanceTestClient implements exactly the methods needed by the acceptance tests, and in exactly the way that makes them as easy to read as possible.
  2. AcceptanceTestClient hides stuff that the test code doesn’t need, for example the concept of a WhiteboardEnvelope, the createIfMissing boolean, and other lower-level details. In reality there are more services involved too, such as a UserService and a WhiteboardSyncService.

I’m not going to bore you with more details of the AcceptanceTestClient code, since this article isn’t about the internal plumbing of web whiteboard. Suffice it to say, AcceptanceTestClient maps the needs of the acceptance tests to the lower-level details of interacting with the whiteboard service interfaces. This was easy to implement, since the real client code effectively serves as a how-do-I-interact-with-the-service tutorial.

Anyway, now our Simplest Possible acceptance test passes!


  @Test
  public void openWhiteboardThatDoesntExist() {
    client.openWhiteboard("xyz");
    assertFalse(client.hasWhiteboard());
  }

Next step is to clean things up a bit.

Actually I didn’t write any production code for this (since the feature already exists and works); it was just test framework code. But nevertheless I spent a few minutes cleaning that up, removing duplication, making method names clearer, and so on.

Finally I add one more test, just for the sake of completeness, and because it’s so easy :o)


  @Test
  public void createNewWhiteboard() {
    client.createNewWhiteboard();
    assertTrue(client.hasWhiteboard());
  }

Hurray, we have a test framework! And we didn’t even need any fancy third-party libraries for it. Just Java and JUnit.

Step 2.4: Write the acceptance test code for the Password Protect feature

Now it’s time to add the test for my password protection feature.

I start by copying in my original test “spec” as pseudocode:


  @Test
  public void passwordProtect() {
    //1. I create a new whiteboard
    //2. I set a password on it.
    //3. Joe tries to open my whiteboard, is asked to enter a password.
    //4. Joe enters the wrong password and is denied access
    //5. Joe tries again, enters the right password, and gets access.
  }

And now, again, I write the test code while pretending that AcceptanceTestClient has everything I need, in exactly the way I need it. I find this technique immensely useful.


  @Test
  public void passwordProtect() {
    //1. I create a new whiteboard
    myClient.createNewWhiteboard();
    String whiteboardId = myClient.getCurrentWhiteboardId();

    //2. I set a password on it.
    myClient.protectWhiteboard("bigsecret");

    //3. Joe tries to open my whiteboard, is asked to enter a password.
    try {
      joesClient.openWhiteboard(whiteboardId);
      fail("Expected WhiteboardProtectedException");
    } catch (WhiteboardProtectedException err) {
      //Good
    }
    assertFalse(joesClient.hasWhiteboard());

    //4. Joe enters the wrong password and is denied access
    try {
      joesClient.openProtectedWhiteboard(whiteboardId, "wildguess");
      fail("Expected WhiteboardProtectedException");
    } catch (WhiteboardProtectedException err) {
      //Good
    }
    assertFalse(joesClient.hasWhiteboard());

    //5. Joe tries again, enters the right password, and gets access.
    joesClient.openProtectedWhiteboard(whiteboardId, "bigsecret");
    assertTrue(joesClient.hasWhiteboard());
  }

This code took just a few minutes to write, because I could just make up things as I went along. Almost none of these methods actually exist in AcceptanceTestClient (yet).

As I wrote this code, I had to make a number of design decisions. No need to think too hard, just do the first thing that comes to mind. Perfect is the enemy of good enough, and right now I want just good enough, which means a runnable test that fails. Later, when the test runs and is green, I will refactor and think harder about the design.

It’s very tempting to start cleaning up the test code now, especially refactoring out those ugly try/catch statements. But part of the discipline of TDD is to get to green before you start refactoring; the tests will protect you as you refactor. So I decided to hold off on the cleanup.

Step 3: Make the acceptance test run, but fail

Following the Red-Green-Refactor cycle, the next step is to make the test run but fail.

Again, I use Eclipse shortcuts to have it create empty versions of all the missing methods. Very nice. Run the test and, voila, we have Red!
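
For this test, the auto-generated stubs in AcceptanceTestClient look roughly like this (reconstructed from the test code above; the article doesn’t show this intermediate state). WhiteboardProtectedException is presumably an unchecked exception, since the earlier test calls openWhiteboard without handling it:

public void createNewWhiteboard() {
       // TODO Auto-generated method stub
}

public String getCurrentWhiteboardId() {
       // TODO Auto-generated method stub
       return null;
}

public void protectWhiteboard(String password) {
       // TODO Auto-generated method stub
}

public void openProtectedWhiteboard(String whiteboardId, String password) {
       // TODO Auto-generated method stub
}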

Step 4: Make the acceptance test green

Now I’ve got a bunch of production code to write. I’m adding several new concepts to the system. As I do this, some of the code I add is non-trivial, so it needs to be unit-tested. I do that using TDD. Same as ATDD, but on a smaller scale.

Here’s how ATDD and TDD fit together. Think of ATDD as an outer cycle:

For each loop around the acceptance test cycle (at a feature level), we do multiple loops of the unit test cycle (at class & method levels).

So although my high-level focus is getting the acceptance test to Green (which can take a few hours), my low-level focus is for example getting the next unit test to Red (which usually takes just a few minutes).
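
For example, one such inner-loop unit test might look something like this. This is a hypothetical sketch; a protectWhiteboard method on the service itself, with this signature, is my assumption, not the article’s actual API:

import static org.junit.Assert.fail;

import org.junit.Before;
import org.junit.Test;

public class WhiteboardProtectionTest {
  private WhiteboardService service;

  @Before
  public void initService() {
    service = new WhiteboardServiceImpl(new FakeWhiteboardStorage());
  }

  @Test
  public void protectedWhiteboardCantBeOpenedWithoutPassword() {
    service.getWhiteboard("abc", true); //create the whiteboard
    service.protectWhiteboard("abc", "bigsecret"); //assumed method & signature

    try {
      service.getWhiteboard("abc", false);
      fail("Expected WhiteboardProtectedException");
    } catch (WhiteboardProtectedException err) {
      //Good: access was denied
    }
  }
}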

This isn’t hardcore “Leather & Whip TDD”. It’s more like “at least make sure that unit tests & production code are in the same commit”, with commits happening several times per hour. You might call that TDD-ish :o)

Step 5: Clean up the code

As usual, once the acceptance test is green, it’s cleanup time. Never skimp on that step! It’s like doing the dishes after having a meal – it’s quickest to do it right away.

I clean up not only the production code, but the test code as well. For example I extract the messy try-catch stuff into a helper method, and end up with this nice and clean test method:


  @Test
  public void passwordProtect() {
    myClient.createNewWhiteboard();
    String whiteboardId = myClient.getCurrentWhiteboardId();

    myClient.protectWhiteboard("bigsecret");

    assertCantOpenWhiteboard(joesClient, whiteboardId);

    assertCantOpenWhiteboard(joesClient, whiteboardId, "wildguess");

    joesClient.openProtectedWhiteboard(whiteboardId, "bigsecret");
    assertTrue(joesClient.hasWhiteboard());
  }
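
For reference, the extracted helper might look something like this (the article doesn’t show it; this is reconstructed from the try/catch blocks above):

private void assertCantOpenWhiteboard(AcceptanceTestClient client, String whiteboardId) {
  try {
    client.openWhiteboard(whiteboardId);
    fail("Expected WhiteboardProtectedException");
  } catch (WhiteboardProtectedException err) {
    //Good: access was denied
  }
  assertFalse(client.hasWhiteboard());
}

private void assertCantOpenWhiteboard(AcceptanceTestClient client, String whiteboardId, String password) {
  try {
    client.openProtectedWhiteboard(whiteboardId, password);
    fail("Expected WhiteboardProtectedException");
  } catch (WhiteboardProtectedException err) {
    //Good: wrong password was rejected
  }
  assertFalse(client.hasWhiteboard());
}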

My goal is to make the acceptance test so short & clean & easy to read that comments are redundant. The original pseudocode comments act as a template – “here’s how clear I want this code to be!”. Removing the comments gives a sense of victory, and as a positive side-effect makes the method even shorter!

What next?

Rinse and repeat. Once I had the first test case working, I thought about what’s missing. For example, I said password protection should require user authentication. So I added a test for that, made it red, made it green, and cleaned up. And so on.

Here is a full list of tests that I’ve created for this feature (so far):

  • passwordProtectionRequiresAuthentication()
  • protectWhiteboard()
  • passwordOwnerDoesntHaveToKnowThePassword()
  • changePassword()
  • removePassword()
  • whiteboardPasswordCanOnlyBeChangedByThePersonWhoSetIt()
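
As a taste, the first of these might read something like this (a hypothetical sketch; logOut() and NotAuthenticatedException are assumed names, not the article’s actual API):

  @Test
  public void passwordProtectionRequiresAuthentication() {
    myClient.createNewWhiteboard();
    myClient.logOut(); //assumed helper: make the client anonymous again

    try {
      myClient.protectWhiteboard("bigsecret");
      fail("Expected NotAuthenticatedException");
    } catch (NotAuthenticatedException err) {
      //Good: anonymous users can't set passwords
    }
  }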

I’ll most certainly add more tests later, as I discover bugs or add new features.

All in all, this was about two days of effective coding. Much of it was going back and iterating on the code and design; it was not as linear as it might seem in this article.

What about manual testing?

I did lots of manual testing as well, after the automated tests were green. But since the automated tests cover the basic functionality and many of the edge cases, I could focus the manual testing on more subjective and exploratory stuff. How is the high level user experience? Does the flow make sense? Is it understandable? Where do I need to add help texts? Is the design aesthetically acceptable? I’m not trying to win any design awards, but I don’t want something monumentally ugly either.

A strong suite of automated acceptance tests removes the need for boring, repetitive manual testing (aka “monkey testing”), and frees up time for the more interesting and valuable type of manual testing.

Ideally I should have built automated acceptance tests from the beginning, so part of this is really just me paying off some technical debt.

Key take-away points

There, I hope this example was useful to you! It demonstrates a pretty typical situation – “I’m about to build a new feature, and it would be nice to write an automated acceptance test, but I don’t have any so far and I don’t know what frameworks to use or even how to get started”.

I really like this pattern; it’s gotten me unstuck so many times. In summary:

  1. Pretend that you have an awesome framework encapsulated behind a really convenient helper class (in my case AcceptanceTestClient).
  2. Write a very simple acceptance test for something that already works today (like just opening your application). Use it to drive your implementation of AcceptanceTestClient and the associated test configuration (such as faking connections to databases and other external services).
  3. Write the acceptance test for your new feature. Make it run but fail.
  4. Make it green. While coding, write unit tests for any non-trivial stuff.
  5. Refactor. And maybe write some more unit tests for good measure, or remove redundant ones. Keep the code squeaky clean!

Once you’ve done this, you’ve crossed the most difficult threshold. You’re up and running with ATDD!

About the Author

Henrik Kniberg is an Agile/Lean coach at Crisp in Stockholm, working primarily with Spotify. He enjoys helping companies succeed with both the technical and human sides of software development, as described in his popular books “Scrum and XP from the Trenches”, “Kanban and Scrum – making the most of both”, and “Lean from the Trenches”.
