Brenton Cleeland

Types of Testing You Should Care About: Integration Testing

Published on

Integration tests traverse multiple components or layers of your application during execution. With modern tooling they allow us to efficiently test large parts of our codebase and verify its behaviour.

Write Tests. Not too many. Mostly integration

The term integration testing covers a broad range of tests. The best definition is probably a relative one: larger than a unit test, smaller than an end-to-end test.

The depth of an integration test is determined by the behaviour you are testing. You could use a mocking framework to mock out calls to third-party services, your own database or even other parts of your own code. Other tests might exercise those dependencies for real. Some frameworks make this easier than others.
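As a quick sketch of that first approach, here's what mocking out a third-party call inside an otherwise real Django test can look like. The shop.payments.charge_card function and the /checkout/ route are hypothetical stand-ins; the point is that only the outbound call is replaced while the rest of the request runs for real.

from unittest.mock import patch

from django.test import TestCase


class CheckoutIntegrationTest(TestCase):
    def test_checkout_without_calling_the_real_gateway(self):
        # Patch the function where the view looks it up (module path assumed)
        # so the test never makes a real network call.
        with patch("shop.payments.charge_card", return_value={"status": "ok"}) as mock_charge:
            response = self.client.post("/checkout/", data={"amount": "10.00"})

        self.assertEqual(response.status_code, 302)
        mock_charge.assert_called_once()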

We normally find ourselves focussing our integration testing effort on the public interfaces of the code we write. For web developers this will mean the routes, controllers or views that a client will call. If you're writing an IDE plugin, it will be the hooks that the IDE calls. For an app it may be the UI elements that the user will interact with.

Integration tests should be fast enough that you can run them locally on your machine regularly as part of your development process. You should focus on integration tests that give you a fast feedback cycle.

You will likely find yourself using the same "unit testing" framework to run both unit tests and integration tests. For many projects there is often no distinction between unit and integration tests when they are run. Sometimes you will see them separated so they can be run at different frequencies or with different tooling.
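Django, for example, lets you tag tests and then include or exclude those tags when running them. A minimal sketch, assuming you run tests with manage.py test:

from django.test import TestCase, tag


@tag("integration")
class TaggedIntegrationTest(TestCase):
    def test_something_end_to_endish(self):
        ...


# Run only the tagged tests:
#   python manage.py test --tag=integration
# Or skip them for a quicker local run:
#   python manage.py test --exclude-tag=integration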

Let's dive into an example. This is a view from a Django app that takes a user's response, saves it to the database, and returns a redirect to the results page.

The code for that view is here:

from django.shortcuts import get_object_or_404, redirect, render

from .models import Assessment, AssessmentResult  # project models; import path assumed
from .utils import clamp  # project helper that bounds a value between a minimum and maximum; path assumed


def assessment(request, assessment_slug):
    assessment_obj = get_object_or_404(Assessment, slug=assessment_slug)

    # The version recorded alongside a result is the assessment's slug (see the tests below).
    assessment_version = assessment_obj.slug

    if request.method == "POST":
        current = {}
        desired = {}

        user_name = request.POST.get("name")
        max_value = len(assessment_obj.assessment["ratings"])

        for name, category in assessment_obj.assessment["categories"]:
            current[category] = clamp(int(request.POST.get(category, "1")), 1, max_value)

            if request.POST.get(f"{category}-levelup"):
                desired[category] = clamp(current[category] + 1, 1, max_value)
            else:
                desired[category] = current[category]

        ar = AssessmentResult.objects.create(
            name=user_name, current=current, desired=desired, assessment_version=assessment_version
        )
        return redirect(ar)

    return render(
        request,
        "assessment.html",
        {
            "categories": assessment_obj.assessment["categories"],
            "assessment_version": assessment_version,
            "assessment_name": assessment_obj.name,
            "ratings": assessment_obj.assessment["ratings"],
            "wide_category": assessment_obj.assessment.get("wide-category", False),
        },
    )

There's a bunch going on there! At the top we have some custom handling for getting the assessment that the user will take. Then we have handling of the POST data that will be submitted by the user, which results in a redirect to the results page. And finally, we have the render() function that renders our template for GET requests.

Our first integration test will simply call the URL that routes to this view. Django's test client handles a bunch of the work required to make that possible. Most modern web frameworks will allow you to take a similar approach.

The initial integration test for our GET request is simple:

class AssessmentViewIntegrationTest(TestCase):
    def test_get_assessment_with_valid_slug(self):
        # Arrange: set up our Assessment
        Assessment.objects.create(
            name="Test",
            slug="test-one",
            assessment={
                "categories": [],
                "ratings": [],
            },
        )

        # Act
        response = self.client.get("/assessment/?v=test-one")

        # Assert
        self.assertEqual(response.status_code, 200)
        self.assertEqual(response.context["assessment_name"], "Test")

Note that I'm using the "Arrange, Act, Assert" format that we saw in the unit testing post. This is personal preference. You could use "Given, When, Then", or some other similar format. The key is to be as consistent as possible in the way that you organise your test cases.

We are asserting that the view finds the Assessment object in our database and successfully returns a response with the expected context. The Django test runner handles the heavy lifting of using a temporary database and transactions to isolate the data created in each test.
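As a rough sketch of that isolation, assuming the Assessment model is importable as shown, data created in one test is rolled back before the next test runs:

from django.test import TestCase

from .models import Assessment  # import path assumed


class IsolationSketchTest(TestCase):
    def test_creates_an_assessment(self):
        Assessment.objects.create(name="Only here", slug="only-here", assessment={})
        self.assertEqual(Assessment.objects.count(), 1)

    def test_starts_from_a_clean_database(self):
        # Objects created in other tests were rolled back with their
        # transactions, so nothing from them is visible here.
        self.assertEqual(Assessment.objects.count(), 0)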

This test case gives us good coverage of the happy-path GET request. Let's add another integration test for the form submission.

class AssessmentSubmissionIntegrationTest(TestCase):
    def test_create_assessment_result_by_submitting_form(self):
        # Arrange: set up our Assessment and matching Form data
        Assessment.objects.create(
            name="Test",
            slug="test-one",
            assessment={
                "categories": [("Jumping", "jumping"), ("Running", "running")],
                "ratings": ["Good", "Better", "Best"],
            },
        )

        form_data = {
            "v": "test-one",
            "name": "Test User",
            "jumping": 2,
            "running": 1,
            "running-levelup": "on",
        }

        # Act
        response = self.client.post("/assessment/", data=form_data)

        # Assert
        # success should redirect to the results page
        self.assertEqual(response.status_code, 302)

        # there should only be a single assessment created
        self.assertEqual(AssessmentResult.objects.count(), 1)

        # extract the assessment_result_id from the final URL
        assessment_result_id = response.url.replace("/results/", "").replace("/", "")

        # assert on the expected values of the AssessmentResult object
        result = AssessmentResult.objects.get(pk=assessment_result_id)
        self.assertEqual(result.assessment_version, "test-one")
        self.assertEqual(result.name, "Test User")
        self.assertEqual(result.current, {"jumping": 2, "running": 1})
        self.assertEqual(result.desired, {"jumping": 2, "running": 2})

By setting up our Assessment object inside the test we are avoiding any reliance on data that already exists in the database. The creation of this object would be a good candidate to move to a factory or generator in the future, but for now that's unnecessary. Creating the object inside our test case (or a setUp()/before() function) is important to ensure that our tests are easy to maintain and fail for the right reason.
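If that factory does become worthwhile later, it could be as small as a helper function with sensible defaults that individual tests override as needed. A sketch, mirroring the test data above:

def create_assessment(name="Test", slug="test-one", categories=None, ratings=None):
    # Small test factory: callers only specify the fields they care about.
    if categories is None:
        categories = [("Jumping", "jumping"), ("Running", "running")]
    if ratings is None:
        ratings = ["Good", "Better", "Best"]
    return Assessment.objects.create(
        name=name,
        slug=slug,
        assessment={"categories": categories, "ratings": ratings},
    )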

An example of failing for the right reason is the assignment of assessment_result_id after the initial assertions. If the redirect on success stops working I want to see that with an assertion failing rather than the .get() failing later in the test.
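If the string manipulation used to pull the id out of the redirect URL ever feels fragile, Django's resolve() is an alternative. A sketch, assuming the results URL pattern captures the id as a named argument (the argument name here is a guess):

from django.urls import resolve

match = resolve(response.url)
assessment_result_id = match.kwargs["assessment_result_id"]  # kwarg name assumed from the URL pattern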

These two test cases give us almost 100% coverage of the code we have already written. To round that out we need to add tests for the two error states that we are handling.

class AssessmentViewIntegrationTest(TestCase):
    # ...
    def test_get_assessment_does_not_exist(self):
        response = self.client.get('/assessment/?v=fake')
        self.assertEqual(response.status_code, 404)


class AssessmentNotAllowedMethodTest(TestCase):
    def test_patch_request_not_allowed(self):
        response = self.client.patch('/assessment/')
        self.assertEqual(response.status_code, 405)
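For the PATCH test to see a 405, the view has to restrict its allowed methods; that isn't shown in the view code above, but in Django it's typically done with the require_http_methods decorator, roughly like this:

from django.views.decorators.http import require_http_methods


@require_http_methods(["GET", "POST"])
def assessment(request, assessment_slug):
    ...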

We can continue to build out more integration tests for this view that drive improvements to the code. Test candidates might include submissions with missing form fields, values outside the ratings range, or an empty name.

It's likely that the difference between unit and integration tests is mostly unimportant. On a modern project, tests similar to the ones in this post will cover far more code than tests for small units of functionality. If your goal is to verify your code's behaviour, then this style of testing is key.