Saturday 24 April 2010

Moving To My Site

So, given that:

1. Visual C# Express
2. XNA
3. NUnit
4. Rhino.Mocks
5. Codeplex, and
6. Mercurial

are all free, I've decided to create a new site (continuing the blog) dedicated to free tools and XNGen development. I've spent the last couple of days making some improved TDD demos and re-writing the tutorials for the site. I'm going to be posting all the code on Codeplex (using Mercurial revisions to create TDD 'slideshows'). As soon as I've finished my new tutorials on how to use Codeplex and Mercurial, I'll post a link to the new site here.

Check back in the next few days.

Wednesday 21 April 2010

NUnit Part 5: Mocks, Fakes and Stubs

So we've finally reached the penultimate post in the NUnit series. So far I have:

1. Discussed the importance of TDD and unit testing,
2. Introduced a free C# testing framework (NUnit),
3. Shown how to simplify tests using set up/tear down code, and
4. Discussed the importance of interfaces and dependency injection.

This is actually the last post on coding tests, as the final post will show how to add a shortcut to Visual C# Express for running our tests. So what's left? Well, as the title and the frequent references in the last post might have given away, the wonderful world of mocking.

We saw in the last post that we should always supply any objects a class might need via its constructor (or some other method), rather than have the class make them itself. But that means that in order to test a class, we might need to pass in multiple objects which in turn would require multiple objects and so on. So, to save writing a whole heap of extra code, we create alternate versions of the required objects.

If we want to test an object based on another object's response, we can create a stub/fake. This is a greatly simplified alternative version that returns preset responses rather than performing real calculations. A good example would be a class that requires a random number generator: we can create a stub that returns exactly the values we need for our tests.
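For example, a hand-rolled stub for a random number generator might look something like this (the IRandomGenerator interface and the stub itself are purely illustrative):

// The interface our game code depends on.
public interface IRandomGenerator
{
    int Next( int maxValue );
}

// A stub that always returns a preset value, so our tests are completely predictable.
public class StubRandomGenerator : IRandomGenerator
{
    public StubRandomGenerator( int valueToReturn )
    {
        this.ValueToReturn = valueToReturn;
    }

    public int Next( int maxValue )
    {
        return this.ValueToReturn;
    }

    private int ValueToReturn
    {
        get;
        set;
    }
}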

Alternatively, if we want to test how objects interact, we can set up some expectations. Typically these take the form of:

1. Requiring that a method is called with certain arguments, or
2. Requiring a method is called a certain number of times (possibly zero).

At the end of our test, we can then verify (think of it as a kind of Assert) that the expectations were met. In these cases, we should use a mock. Mocks involve a lot more code than a simple stub. But isn't that exactly the "whole heap of extra code" we were trying to avoid? Well, that depends. Not if we use some automated mocking tools!

There are a number of such tools available for free. NUnit includes a dll for mocking, but you should also check out Rhino.Mocks and Moq. We'll be focusing on NUnit, but the others are both excellent. I personally prefer Rhino.Mocks as I find it slightly more powerful, but the choice is up to you.
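Just to give a flavour of automated mocking, here's a rough sketch in Rhino.Mocks' arrange-act-assert style (the IDice interface and the Attack class are invented purely for this example):

using NUnit.Framework;
using Rhino.Mocks;

// A hypothetical interface our combat code depends on.
public interface IDice
{
    int Roll( int sides );
}

// A hypothetical class under test, receiving its dice by dependency injection.
public class Attack
{
    private readonly IDice dice;

    public Attack( IDice dice )
    {
        this.dice = dice;
    }

    public void Resolve()
    {
        // Roll a twenty-sided die to see whether the attack hits.
        this.Hit = this.dice.Roll( 20 ) >= 10;
    }

    public bool Hit { get; private set; }
}

[TestFixture]
public class Attack_Tests
{
    [Test]
    public void Resolve_Should_Roll_A_Twenty_Sided_Die()
    {
        // Create the mock and set an expectation on it.
        IDice dice = MockRepository.GenerateMock<IDice>();
        dice.Expect( d => d.Roll( 20 ) ).Return( 15 );

        Attack attack = new Attack( dice );
        attack.Resolve();

        // Verify that the expectation was met.
        dice.VerifyAllExpectations();
    }
}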

So how do we create mock objects with NUnit?

Still a work in progress...

Saturday 17 April 2010

NUnit Part 4: Interfaces & Dependency Injection

At the end of my last post, I mentioned that we can test a class's behaviour using mocks, stubs and fakes. This is a huge help when testing code. But before we can use them, we need to review a couple of (closely related) coding practices - interfaces and dependency injection.

In my experience, interfaces are easily one of the most overlooked aspects of C#. Many see them (as I once did, when I first learned Java) as rather pointless extra code. How wrong they are! Firstly, by designing to an interface, we are already creating a kind of test. We are guaranteeing that any class that implements the interface will provide all of its methods/properties. If it doesn't, we'll get compile-time errors (which is a Good Thing™).

Another benefit of interfaces is a consequence of dependency injection (DI). Before I continue, a quick question: how often do your constructors do work (call methods/properties) on their arguments? Chances are your answer is "most of the time". But what's wrong with this? Firstly, it prevents us from using some of the most powerful tools we have when testing (see the next post). Secondly, we have no way of knowing what the constructor is actually using (and so we can create hidden dependencies - a Bad Thing™). Before you start complaining that passing in all our objects breaks encapsulation, consider the following constructors:


1. Foo(Bar bar1, Bar bar2, string name)
2. Foo(FooSettings settings), where FooSettings(Bar bar1, Bar bar2, string name)


In the first instance, we can clearly see that Foo uses two Bars and a string. In the second, we have no direct knowledge of what Foo uses - just that it takes an intermediate class from which it extracts whatever it might need. The FooSettings class is essentially redundant code, which introduces more room for bugs. DI simply means passing in all the objects a class will need (we 'inject' what it 'depends on'). No big mystery to solve. The main advantage is that we reduce coupling between classes, and so reduce the chance that an edit in one class will force edits in many others. But what does this have to do with interfaces?
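To be clear, a DI-friendly Foo does nothing clever in its constructor - it just stores what it is given. A minimal sketch (with Bar standing in for any old dependency):

// Stand-in for some dependency.
public class Bar
{
}

public class Foo
{
    public Foo( Bar bar1, Bar bar2, string name )
    {
        // Just store the dependencies - no work is done on them here.
        this.FirstBar = bar1;
        this.SecondBar = bar2;
        this.Name = name;
    }

    private Bar FirstBar { get; set; }
    private Bar SecondBar { get; set; }
    private string Name { get; set; }
}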

While a class in C# can only have one parent class, it can implement multiple interfaces. This means that it can be cast to any of the interfaces it implements. To understand why this matters, you need to know a little about reference types. Classes and interfaces are both examples. Reference types differ from value types in how we store them in memory. When we call a reference type's constructor, we actually create two different blocks in RAM (if you want to get technical, one is on the heap while the other is on the stack). The first is the actual object, with all the state data it needs to work. The second is a reference to the first block: it says where the state data is and what its type is, and therefore what methods can be called on it.

When we pass a reference type to a method, we simply create a new reference with its associated type. If, in the process, it is cast to another type, the new reference's type is changed appropriately. What does this mean and how does this help us? Time for a simple example.

Suppose we are making a Dungeons & Dragons-style game. Every once in a while (at the end of each dungeon) we'll encounter some sort of boss. Let's call it a... hmm... dragon? That'll do! Rather than creating a new dragon for each dungeon, we'll reuse the same instance over and over, changing its state as required (hit points, attack type, texture, name...). But we should restrict where its state can be set, to prevent it from accidentally being given max health over and over again (or suffering any other unpleasant side effects).

To do this, we'll create a dungeon factory that exists for the lifetime of the game. Whenever we want a new dungeon, we'll ask the factory for one (supplying whatever arguments we need by DI). It will then create the dungeon's layout, assign basic monsters and so on. The factory will also store the single instance of the dragon class, which implements the IDragon interface. The factory can then call any reset methods (which aren't in the interface) on the dragon before passing a reference to it to the new dungeon. To prevent the dungeon from calling the reset methods, it stores the dragon as an IDragon instead. Thus, even though the dungeon and the factory are pointing at the same object, the dungeon only knows how to use the dragon - not reset it. Incidentally, we could apply the same principle to any of the dungeon's objects - rooms, monsters, treasure and so on.
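A stripped-down sketch of the idea (all the names and members here are invented - XNGen's real classes will differ):

// The dungeon only ever sees this interface.
public interface IDragon
{
    void Attack();
    int HitPoints { get; }
}

public class Dragon : IDragon
{
    public void Attack()
    {
        // Breathe fire, bite, etc...
    }

    public int HitPoints { get; private set; }

    // Not part of IDragon, so only code holding a Dragon reference can call it.
    public void Reset( int hitPoints )
    {
        this.HitPoints = hitPoints;
    }
}

public class Dungeon
{
    public Dungeon( IDragon boss )
    {
        // The dungeon can use the dragon, but cannot reset it.
        this.Boss = boss;
    }

    private IDragon Boss { get; set; }
}

public class DungeonFactory
{
    // The single dragon instance, reused for every dungeon.
    private readonly Dragon dragon = new Dragon();

    public Dungeon CreateDungeon()
    {
        // Only the factory holds a Dragon reference, so only it can reset the state.
        this.dragon.Reset( 100 );

        // The dragon is implicitly cast to IDragon as it is passed in.
        return new Dungeon( this.dragon );
    }
}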

If you're still confused, you'll see this in action in a few posts' time when we create our new keyboard class and interface for XNGen's input system.

To summarise, by implicitly casting a class to one of its interfaces when passing it as an argument, we can reduce the chance of errors occurring. But there is an additional benefit with regards to testing - creating mocks, stubs and fakes.

To be continued next post!

Friday 16 April 2010

NUnit Part 3: Exception Handling

This is going to be very short (at least compared to the last couple of posts). So far we have seen how to write some basic tests using assertions and how to assign set up/tear down methods. But what if we want to check that a call throws an exception? We could wrap our test in a try-catch-finally block. We would then have to assert:

1. That the expected exception was thrown (test in the catch block), and
2. That an exception was thrown at all (test in the finally block)

That's an awful lot of extra code to write. Tests are meant to be small. Fortunately we have a way round this - we just add the attribute [ExpectedException()].

A perfect example would be checking that calling the divide method of our calculator with a zero argument throws a DivideByZeroException. As with our earlier tests, this is really easy:



using System;
using NUnit.Framework;

namespace WindowsGame1
{
    [TestFixture]
    public class Calculator_Tests
    {
        // Omitted earlier code...

        [Test]
        [ExpectedException(typeof(DivideByZeroException))]
        public void Divide_By_Zero_Should_Throw_Exception()
        {
            Calculator.DivideBy( 0 );
        }
    }
}



No assertions. No try-catch-finally blocks. Just one line of code (well, two if you count the attribute). Note that we added a using directive for the System namespace to our test class, so we can refer to the exception type directly. As long as the expected exception is thrown, the test will pass. If a different one is thrown, it will fail. Likewise, if no exception is thrown it will fail.
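For comparison, here's roughly what the manual try-catch-finally version of the same test would have looked like (a sketch only):

[Test]
public void Divide_By_Zero_Should_Throw_Exception_The_Long_Way()
{
    bool expectedExceptionWasThrown = false;

    try
    {
        Calculator.DivideBy( 0 );
    }
    catch ( DivideByZeroException )
    {
        // The expected exception type was thrown.
        expectedExceptionWasThrown = true;
    }
    finally
    {
        // Fails if no exception (or the wrong one) was thrown.
        Assert.IsTrue( expectedExceptionWasThrown );
    }
}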

We now have the basics in place to test boundary conditions as well. But our tests are still very basic. In order to really get to grips with testing, we would like to examine what a method is actually doing internally (both to its arguments and its own state objects). We solve this with stubs, fakes and mocks. But before we can get to that, we'll need to introduce two important coding practices:

1. Designing to an interface, and
2. Dependency injection.

Stay tuned!

NUnit Part 2: Setup Methods

In my previous post I gave a very brief introduction to using NUnit. Unfortunately, our current approach requires us to write quite a lot of repetitive setup code at the start of each test. In our examples this is limited to creating our calculator and setting its initial value. But when we introduce Dependency Injection in a couple of posts' time, we'll see that this setup code can become quite involved.

Fortunately, NUnit provides us with four very useful attributes we can apply to methods to simplify this process:

1. [TestFixtureSetUp]
2. [TestFixtureTearDown]
3. [SetUp]
4. [TearDown]

The first two are [TestFixtureSetUp] and [TestFixtureTearDown]. We attach these to methods that we want to be called exactly once - one before any of the tests run, the other after they have all finished. This way we can create our persistent objects (for instance, the calculator) and also safely clean them up (for instance, if they have some code that should be called before they go out of scope).

The second two should be attached to methods that are called before and after each test. This is where we can create/destroy temporary objects and reset the state of permanent ones.

Time to put this into practice. We're going to extend our calculator class, so that it can also subtract and multiply (we'll get to division in the next post). Rather than walk you through essentially a repetition of the last post, here's the code for the calculator:



namespace WindowsGame1
{
    public class Calculator
    {
        public Calculator()
        {
            this.CurrentValue = 0;
        }

        public void Add( int aNumber )
        {
            this.CurrentValue += aNumber;
        }

        public void Subtract( int aNumber )
        {
            this.CurrentValue -= aNumber;
        }

        public void Multiply( int aNumber )
        {
            this.CurrentValue *= aNumber;
        }

        public int CurrentValue
        {
            get;
            set;
        }
    }
}



and the tests:



using NUnit.Framework;

namespace WindowsGame1
{
    [TestFixture]
    public class Calculator_Tests
    {
        [Test]
        public void Should_Add_Integers()
        {
            Calculator calculator = new Calculator();
            calculator.CurrentValue = 1;

            calculator.Add( 2 );
            Assert.AreEqual( 3 , calculator.CurrentValue );

            calculator.Add( 3 );
            Assert.AreEqual( 6 , calculator.CurrentValue );

            calculator.Add( -4 );
            Assert.AreEqual( 2 , calculator.CurrentValue );
        }

        [Test]
        public void Should_Subtract_Integers()
        {
            Calculator calculator = new Calculator();
            calculator.CurrentValue = 1;

            calculator.Subtract( 2 );
            Assert.AreEqual( -1 , calculator.CurrentValue );

            calculator.Subtract( 3 );
            Assert.AreEqual( -4 , calculator.CurrentValue );

            calculator.Subtract( -4 );
            Assert.AreEqual( 0 , calculator.CurrentValue );
        }

        [Test]
        public void Should_Multiply_Integers()
        {
            Calculator calculator = new Calculator();
            calculator.CurrentValue = 1;

            calculator.Multiply( 2 );
            Assert.AreEqual( 2 , calculator.CurrentValue );

            calculator.Multiply( 3 );
            Assert.AreEqual( 6 , calculator.CurrentValue );

            calculator.Multiply( -4 );
            Assert.AreEqual( -24 , calculator.CurrentValue );
        }
    }
}



We can simplify our tests (slightly) by extracting the setup code:



using NUnit.Framework;

namespace WindowsGame1
{
    [TestFixture]
    public class Calculator_Tests
    {
        [TestFixtureSetUp]
        public void TestFixtureSetup()
        {
            Calculator = new Calculator();
        }

        [SetUp]
        public void PreTest()
        {
            Calculator.CurrentValue = 1;
        }

        [Test]
        public void Should_Add_Integers()
        {
            Calculator.Add( 2 );
            Assert.AreEqual( 3 , Calculator.CurrentValue );

            Calculator.Add( 3 );
            Assert.AreEqual( 6 , Calculator.CurrentValue );

            Calculator.Add( -4 );
            Assert.AreEqual( 2 , Calculator.CurrentValue );
        }

        [Test]
        public void Should_Subtract_Integers()
        {
            Calculator.Subtract( 2 );
            Assert.AreEqual( -1 , Calculator.CurrentValue );

            Calculator.Subtract( 3 );
            Assert.AreEqual( -4 , Calculator.CurrentValue );

            Calculator.Subtract( -4 );
            Assert.AreEqual( 0 , Calculator.CurrentValue );
        }

        [Test]
        public void Should_Multiply_Integers()
        {
            Calculator.Multiply( 2 );
            Assert.AreEqual( 2 , Calculator.CurrentValue );

            Calculator.Multiply( 3 );
            Assert.AreEqual( 6 , Calculator.CurrentValue );

            Calculator.Multiply( -4 );
            Assert.AreEqual( -24 , Calculator.CurrentValue );
        }

        private Calculator Calculator
        {
            get;
            set;
        }
    }
}



Granted, our example doesn't really call for the use of setup methods. But future examples will - in particular, the first chapter on XNGen's creation (the input system), which we will start in five posts' time. In the meantime, remember that it's not just our product code that we can refactor, but our tests too.

Thursday 15 April 2010

NUnit Part 1: A Testing Framework For C#

So if you've read my last post (or if you've come across testing before) you'll hopefully be sold on TDD and Unit Testing. But how do we actually go about testing code in C#?

Most coders use a member of the xUnit family of frameworks to test their code. For C# this means NUnit. Simply download the installer, run it and add a reference to it in your project. In Visual C# Express, right-click on your project, choose to add a reference and browse to the NUnit\bin\net-2.0\framework folder. Add nunit.framework.dll and we're done (you won't need the other DLLs until later). That was easy!

Now that our project knows where to find NUnit, we can write our first tests. For this example, we're going to start making a simple calculator in C#. Add a new class to your project called "Calculator_Tests". When I test a class called "X" I like to call the testing class "X_Tests". Normally I try to avoid using underscores in names, but I've grudgingly accepted the wisdom of this approach (we'll see why when we run our tests). Add the following to the class:
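Something along these lines (the same test will reappear in the next post):

using NUnit.Framework;

namespace WindowsGame1
{
    [TestFixture]
    public class Calculator_Tests
    {
        [Test]
        public void Should_Add_Integers()
        {
            // Start from a known value, then check the result of the addition.
            Calculator calculator = new Calculator();
            calculator.CurrentValue = 1;

            calculator.Add( 2 );
            Assert.AreEqual( 3 , calculator.CurrentValue );
        }
    }
}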



You'll need to replace WindowsGame1 with your project's name. The using statement lets the class know about NUnit. The [TestFixture] attribute tells NUnit that the class contains tests. Similarly, the [Test] attribute tells NUnit that it should run the following method. The Assert class is part of NUnit and is how we check results. It has many methods (AreSame checks whether two references point to the same object, AreEqual checks their equality, IsTrue expects a true statement and so on). The test passes if all its asserts pass. On the other hand, if even one assert fails, so does the test.

Note the naming convention for the test. Once again we are using underscores. This is simply to make the results of our tests easier to read and is by no means required.

Now try to compile the project. Obviously we get some errors, as we haven't written the Calculator class yet. But compile-time errors are a Good Thing™. This way, we don't accidentally ship something that's broken. So let's add the minimum amount of code required to compile:
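Something along these lines - an empty Add method and the CurrentValue property, just enough to keep the compiler happy:

namespace WindowsGame1
{
    public class Calculator
    {
        public Calculator()
        {
            this.CurrentValue = 0;
        }

        public void Add( int aNumber )
        {
            // Deliberately left empty - we only want the project to compile.
        }

        public int CurrentValue
        {
            get;
            set;
        }
    }
}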



Again, you'll need to rename WindowsGame1 to your project's name. Hit compile and we're done. Right? No? But according to the compiler our code is fine? Obviously our Add method is wrong. To prove it, let's run our test. Open up the NUnit GUI and load our project's compiled assembly (it should be in the project's bin folder - probably bin\x86\Debug\). Hit run and you'll see a big red cross next to our test name (see why the odd naming convention was recommended?). Hmm... let's fix that! Change our calculator class to:
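Something like this will do:

namespace WindowsGame1
{
    public class Calculator
    {
        public Calculator()
        {
            this.CurrentValue = 0;
        }

        public void Add( int aNumber )
        {
            // The simplest thing that makes the current test pass.
            this.CurrentValue = 3;
        }

        public int CurrentValue
        {
            get;
            set;
        }
    }
}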



Here we've demonstrated a key point about TDD - always do the simplest thing required to pass a test. Sure, we'll miss some important boundary testing (null values and so on), but we can cover all that in our unit tests. For now, let's just get the code working!
Compile and run our test. Green light! So we're finished, right? Well... not quite. We've still not done our refactoring. The problem is the unnamed constant (3). Unnamed constants are a Bad Thing™. Coders often talk about bad smells - that is, code that appears to work but doesn't feel right - and an unnamed constant really stinks. But at least we have a fallback solution that we know makes our test pass. Let's try the (hopefully obvious) solution:
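Only the Add method needs to change (we'll keep the old line commented out as our fallback for now):

public void Add( int aNumber )
{
    // this.CurrentValue = 3;
    this.CurrentValue += aNumber;
}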



Compile, test and green light. OK, so we've not checked our boundary cases (for instance calculator.Add( int.MaxValue )), but for now we're good. We can improve our confidence in the method by adding to our smoke test:
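Something like this (the version that carries over into the next post):

[Test]
public void Should_Add_Integers()
{
    Calculator calculator = new Calculator();
    calculator.CurrentValue = 1;

    calculator.Add( 2 );
    Assert.AreEqual( 3 , calculator.CurrentValue );

    calculator.Add( 3 );
    Assert.AreEqual( 6 , calculator.CurrentValue );

    calculator.Add( -4 );
    Assert.AreEqual( 2 , calculator.CurrentValue );
}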



Now we can delete the commented out line from the Add method. And we're finally done. Hopefully you've seen that tests are easy to implement (if a little pointless in this case) and that TDD can be quite painless. In the next example I'll show you how to reduce the amount of test code you have to write using the setup and teardown methods. Stay tuned.

Test Driven Development & Unit Testing

Before we can properly get under way, I'm going to give a brief introduction to Test Driven Development (TDD) and Unit Testing.

The last couple of decades have seen the rise of TDD. You may have come across project management terms like XP or Agile, but may not have looked into them. Whilst there is much more to these practices, at their core lies the testing process. This phenomenally useful approach greatly reduces debugging time, whilst also giving the programmer confidence that their code is operating as it should. It's also a lot more fun, as you're constantly setting yourself little challenges to overcome.

Whenever you write any code there are many types of test you could use, but two in particular are recommended. The first are tests that aid your development. These are written before the code they relate to and ensure that what you are about to write actually does what you want it to. The second are unit tests. These are written once you have finished writing a class and are used to give confidence that your code is behaving as it should under all conditions.

But what are the benefits? Well, when we write tests before our code, we only need to write the minimum amount of code to satisfy our requirements. It's all too easy (especially in games) to become sidetracked into adding lots of functionality we don't need (see "Y.A.G.N.I." - You Ain't Gonna Need It). TDD helps reduce this. Furthermore, any time we want to refactor (restructure) our code, we can be sure that our changes don't break anything by simply running our tests again. This encourages very short cycles (often only a couple of minutes long) in which we:

1. Write a test that fails.
2. Write just enough code to make it pass.
3. Refactor as needed.

At first this may seem slower - after all we are now writing roughly twice as much code. The key is to remember that our tests are very simple to write, whilst bugs can often be a real pain to find. They also encourage us to spend more time refactoring as we go along (rather than trying to clean up a big mess every few hours) making our code clearer and easier to maintain/edit.

Similarly, unit tests are used to cover our backs. The tests we write as we develop are often what we call "Smoke Tests". These simply demonstrate that the code works as intended in very typical cases. But what about unusual cases? These are referred to as "Boundary Tests" and can be vitally important (we'll see an example in a minute). Even if we include some boundary tests in our development process, the chances are we'll have missed some (or a lot, in my case). We can also write automated tests, which supply random inputs to our methods and are set to run many iterations (say a few billion overnight). This way we gain a lot of confidence that once our code ships, it won't start randomly throwing up bugs.

As a simple example, consider Binary Search. This is a widely known algorithm where, given a sorted array, we can quickly find a value or show that it is not present. The idea is simple. At each iteration we have two positions (the lower and upper bounds). If the lower bound is greater than the upper bound we stop. Otherwise we calculate their mid point and look at the value stored in the array at that point. If it's less than the target, we move the lower bound up past the mid point and repeat the process. Similarly, if it's greater than the target, we move the upper bound down below the mid point. It sounds pretty straightforward. Yet it took well over a decade before a version was published that was widely believed to be bug free. Even then, the solution had a problem. It all comes down to the mid point calculation. The obvious approach is:

mid = (lower + upper) / 2;

Unfortunately, as boundary testing revealed, this can cause a problem: if the sum of lower and upper is greater than the largest number representable by an int, the addition overflows. The solution is to use:

mid = lower + ((upper - lower) / 2);
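Putting it all together, here's a sketch of the whole method in C#, using the safe calculation (one common formulation of the algorithm):

// Returns the index of target in the sorted array, or -1 if it is not present.
public static int BinarySearch( int[] sorted, int target )
{
    int lower = 0;
    int upper = sorted.Length - 1;

    while ( lower <= upper )
    {
        // Overflow-safe mid point calculation.
        int mid = lower + ( ( upper - lower ) / 2 );

        if ( sorted[ mid ] == target )
        {
            return mid;
        }

        if ( sorted[ mid ] < target )
        {
            lower = mid + 1;
        }
        else
        {
            upper = mid - 1;
        }
    }

    return -1;
}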

For a much deeper examination of this example (including many suggested unit tests) see Alberto Savoia's excellent chapter "Beautiful Tests" in "Beautiful Code".

The possibility that such 'invisible' bugs can creep into seemingly correct code is the best argument I can give for the use of tests.

Overview

After several false starts, I am finally launching myself into a new series on engine design in XNA.

The intent is to demonstrate how an engine can be built, whilst demonstrating why XNA itself is not an engine (a common misconception for beginners) and introducing a number of useful tools (NUnit, SVN...) and practices (TDD, design patterns...). This is intended for people who already have some grasp of C# but not necessarily XNA.

For those of you who are new to XNA, can I also strongly suggest following Shawn Hargreaves' excellent blog at: http://blogs.msdn.com/shawnhar/