Having good tests in place is absolutely critical for ensuring a stable, maintainable codebase. Hopefully that doesn’t need any more explanation.
However, what defines a “good” test is not always obvious, and there are a lot of common pitfalls that can quietly undermine your test suite.
If you already know everything about testing but are fed up with trying to debug why a specific test failed, you can skip the intro and jump straight to Debugging Unit Tests.
There are three main types of tests, each with their associated pros and cons:
Unit tests are isolated, stand-alone tests with no external dependencies. They are written from the perspective of “knowing the code,” and they test the assumptions of the codebase and the developer.
Pros:
Cons:
Functional tests are generally also isolated, though sometimes they may interact with other services running locally. The key difference from unit tests is that functional tests are written from the perspective of the user, who knows nothing about the code and only knows what they put in and what they get back. Essentially this is higher-level testing of “does the result match the spec?”
Pros:
Cons:
Integration tests exercise all of the components that your codebase interacts with or relies on, working in conjunction. This is equivalent to “live” testing, but in a repeatable manner.
Pros:
Cons:
A few simple guidelines:
Limiting our focus just to unit tests, there are a number of things you can do to make your unit tests as useful, maintainable, and unburdensome as possible.
Use a single, consistent set of test data. Grow it over time, but do everything you can not to fragment it. It quickly becomes unmaintainable and perniciously out-of-sync with reality.
Make your test data as accurate to reality as possible: supply all the attributes of an object, and provide objects in all the various states you may want to test.
Following the first suggestion above makes the second one far less painful: write once, use everywhere.
To make your life even easier, if your codebase doesn’t have a built-in ORM-like mechanism for managing test data, consider building (or borrowing) one yourself. Being able to run simple retrieval queries on your test data is incredibly valuable.
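If you do go that route, a minimal sketch might look something like the following; the TestDataContainer name and its methods are purely illustrative here, not the API of any particular framework:

    class TestDataContainer(object):
        """Hold shared test objects and support simple retrieval queries."""

        def __init__(self):
            self._objects = []

        def add(self, *objects):
            self._objects.extend(objects)

        def list(self):
            return list(self._objects)

        def first(self):
            return self._objects[0]

        def filter(self, **kwargs):
            # Return every object whose attributes match the given kwargs.
            return [obj for obj in self._objects
                    if all(getattr(obj, key, None) == value
                           for key, value in kwargs.items())]

    # Build the shared data once, then query it from any test:
    #   servers = TestDataContainer()
    #   servers.add(server_1, server_2)
    #   active = servers.filter(status='ACTIVE')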
Mocking is the practice of providing stand-ins for objects or pieces of code you don’t need to test. While convenient, mocks should be used with extreme caution.
Why? Because overuse of mocks can rapidly land you in a situation where you’re not testing any real code. All you’ve done is verified that your mocking framework returns what you tell it to. This problem can be very tricky to recognize, since you may be mocking things in setUp methods, other modules, etc.
A good rule of thumb is to mock as close to the source as possible. If you have a function call that calls an external API in a view, mock out the external API, not the whole function. If you mock the whole function you’ve suddenly lost test coverage for an entire chunk of code inside your codebase. Cut the ties cleanly right where your system ends and the external world begins.
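For example, assuming a mox-based test case in the Horizon style where self.mox and shared test data are set up for you, and hypothetical names where api.server_list is your own wrapper and api.client.list_servers is the call that actually leaves your codebase, the difference looks roughly like this:

    from myapp import api            # hypothetical module
    from myapp.tests import base     # hypothetical test base class


    class ServerViewTests(base.TestCase):
        def test_index(self):
            servers = self.servers.list()  # real objects from shared test data

            # Good: stub the boundary call. api.server_list() still runs,
            # so its logic stays covered by this test.
            self.mox.StubOutWithMock(api.client, 'list_servers')
            api.client.list_servers().AndReturn(servers)
            self.mox.ReplayAll()

            res = self.client.get('/servers/')
            self.assertEqual(200, res.status_code)

            # Too broad: stubbing your own wrapper removes an entire chunk
            # of your code from coverage.
            # self.mox.StubOutWithMock(api, 'server_list')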
Similarly, don’t mock return values when you could construct a real return value of the correct type with the correct attributes. Otherwise you’re just adding another point of potential failure by exercising your mocking framework instead of real code. Following the test data suggestions above will make this a lot less burdensome.
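A companion sketch, written as another method of the same hypothetical test class: the stubbed call returns a real object from the shared test data rather than a hand-built stand-in that only has the attributes you remembered to give it:

    def test_detail(self):
        server = self.servers.first()  # real object, real attributes

        self.mox.StubOutWithMock(api.client, 'get_server')
        # Real return value of the correct type; typos and schema drift
        # in the code under test will actually be caught.
        api.client.get_server(server.id).AndReturn(server)
        self.mox.ReplayAll()

        res = self.client.get('/servers/%s/' % server.id)
        self.assertEqual(200, res.status_code)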
Think long and hard about what you really want to verify in your unit test. In particular, think about what custom logic your code executes.
A common pitfall is to take a known test object, pass it through your code, and then verify the properties of that object on the output. This is all well and good, except if you’re verifying properties that were untouched by your code. What you want to check are the pieces that were changed, added, or removed. Don’t check the object’s id attribute unless you have reason to suspect it’s not the object you started with. But if you added a new attribute to it, be damn sure you verify that came out right.
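As a sketch, assuming a hypothetical annotate_usage function that adds a percent_used attribute to the quota object it is given, and shared test data in self.quotas:

    def test_annotate_usage(self):
        quota = self.quotas.first()

        result = annotate_usage(quota, used=5, limit=10)

        # Verify the data the code actually produced or changed...
        self.assertEqual(50, result.percent_used)
        # ...not attributes it never touched. Checking result.id here would
        # only prove the object survived the round trip, which is rarely
        # what is in doubt.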
It’s also very common to avoid testing things you really care about because it’s more difficult. Verifying that the proper messages were displayed to the user after an action, testing for form errors, making sure exception handling is tested... these types of things aren’t always easy, but they’re extremely necessary.
To that end, Horizon includes several custom assertions to make these tasks easier. assertNoFormErrors(), assertMessageCount(), and assertNoMessages() all exist for exactly these purposes. Moreover, they provide useful output when things go wrong so you’re not left scratching your head wondering why your view test didn’t redirect as expected when you posted a form.
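For instance, a form-handling view test might look roughly like the sketch below; the URL and form data are made up, but the assertions are the Horizon helpers named above (exact signatures may differ slightly between releases):

    def test_create_post(self):
        form_data = {'name': 'server_1', 'flavor': '1'}
        res = self.client.post('/servers/create/', form_data)

        # If validation failed, this prints the actual form errors rather
        # than leaving you with an unexplained status-code mismatch below.
        self.assertNoFormErrors(res)
        # Exactly one success message should have reached the user.
        self.assertMessageCount(res, success=1)
        self.assertEqual(302, res.status_code)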
There are a number of typical (and non-obvious) ways to break your unit tests. Some common things to look for:
Horizon uses mox as its mocking framework of choice, and while it offers many nice features, its output when a test fails can be quite mysterious.
This occurs when you stubbed out a piece of code and it was subsequently called in a way that you didn’t specify it would be. Most often that means either the arguments of the real call don’t quite match the recorded expectation, or the stubbed code is being called more times than you declared.
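Stripped of the Horizon machinery, the failure looks roughly like this; Client and its method are hypothetical, and only the mox calls matter:

    import mox


    class Client(object):
        def get_server(self, server_id):
            raise NotImplementedError("the real call should never run in tests")


    m = mox.Mox()
    client = Client()
    m.StubOutWithMock(client, 'get_server')

    client.get_server('abc-123')   # record: expect exactly this call, once
    m.ReplayAll()

    client.get_server('wrong-id')  # raises mox.UnexpectedMethodCallError:
                                   # the arguments don't match the recording.
                                   # A second call after a single recording
                                   # fails the same way.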
This one is the opposite of the unexpected method call: it means you told mox to expect a call and it never happened. This is almost always the result of an error in the conditions of the test. Using assertNoFormErrors() and assertMessageCount() will make the problem readily apparent in the majority of cases. If not, use pdb and start interrupting the code flow to see where things are going off track.
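Continuing the bare-bones sketch above, the “expected but never called” case surfaces at verification time; when the assertions don’t explain it, interrupting the flow just before the failure point usually does:

    m = mox.Mox()
    client = Client()
    m.StubOutWithMock(client, 'get_server')

    client.get_server('abc-123')   # record the expectation...
    m.ReplayAll()

    # ...but the code under test bails out early (a form error, a swallowed
    # exception, a failed permission check) and never makes the call, so:
    m.VerifyAll()                  # raises mox.ExpectedMethodCallsError

    # When the cause isn't obvious, drop into the debugger right before the
    # point of failure and inspect the state interactively:
    # import pdb; pdb.set_trace()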