Integration Test
When you work on a Main Line, developing with Small Development Tasks and writing Unit Tests, you may still worry that you are missing something. This pattern describes a way to ensure that your Unit Tests didn't miss anything significant.
Speed vs. Completeness
Unit Tests involve assumptions about how the system works, and they are motivated by an analysis of what you expect the system to do. You may miss scenarios, whether through oversight or because of incomplete specifications.
Unit Tests are valuable for checking their assertions, but those assertions rest on assumptions. A unit test uses Mocks to simulate responses from services, and the Mocks are only as good as the specifications they are based on. You could write better specifications, but the return on that effort may be small: software is complex, and even a perfect specification may miss something. Keeping a specification up to date also has overhead.
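To make this concrete, here is a minimal sketch (in Python, with hypothetical names) of a unit test whose Mock encodes an assumption about a service's response shape. If the real service returns a different shape, the test still passes, and the gap only surfaces at a higher level of testing.

```python
from unittest.mock import Mock


def lookup_customer_name(client, customer_id):
    """Return the customer's display name via the customer service client."""
    response = client.get_customer(customer_id)
    return response["name"]  # assumes the service returns a "name" field


def test_lookup_customer_name_uses_service_response():
    # The Mock encodes our assumption about the response shape. If the
    # real service returns {"full_name": ...} instead, this test still
    # passes; the mismatch is invisible to the Unit Test.
    client = Mock()
    client.get_customer.return_value = {"id": 42, "name": "Ada Lovelace"}

    assert lookup_customer_name(client, 42) == "Ada Lovelace"
    client.get_customer.assert_called_once_with(42)
```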
Larger-scale tests are complex and costly to run, in terms of time and perhaps infrastructure. It can be challenging to identify all possible problems from high-level inputs. And relying too heavily on end-to-end testing can mean that errors are not found until deployment.
Software systems are complex, and it can be hard to identify what might happen a few layers down in a system, especially when user interaction is involved.
You may have a defined interface for some integration points, but the interface specification may be incomplete or incorrect, as often happens with third-party integrations.
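As a sketch of how such a discrepancy can surface, a test that exercises the real service (here a hypothetical sandbox endpoint, with assumed field names) asserts on what the service actually returns rather than on what the specification promised:

```python
import requests

# Hypothetical sandbox endpoint for a third-party customer service.
SANDBOX_URL = "https://sandbox.example.com/api/customers"


def test_customer_endpoint_matches_our_assumptions():
    # Exercise the real (sandbox) service instead of a Mock, so that
    # differences between the published specification and the actual
    # behavior show up here rather than in production.
    response = requests.get(f"{SANDBOX_URL}/42", timeout=10)
    assert response.status_code == 200

    body = response.json()
    # These assertions encode the contract we rely on; if the provider's
    # specification was incomplete or wrong, they fail.
    assert "name" in body
    assert isinstance(body["name"], str)
```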
Test End to End, Simply
**Write automated Integration Tests that confirm basic end-to-end functionality. These tests should run after any deployment and after a merge to the Main Line. When the tests expose errors, develop related Unit Tests so that you can identify the same problems sooner.**
An Integration Test exercises the behavior of a system in its deployed state. An Integration Test is also an opportunity to identify gaps in other forms of testing.
An Integration Test can be a:
- Smoke Test: a test that verifies basic functionality to confirm that the application still runs and can reach all of its dependencies (a minimal sketch follows below).
- Regression Test: a suite of tests that confirms that issues previously found in integration are caught if they recur.
- Comprehensive Test: a test that exercises complex scenarios to confirm critical functionality. Comprehensive tests should be limited to critical business functionality that isn't reliably covered by Unit Tests.
(You will also want to verify connectivity in a Health Check.)
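A Smoke Test can be very small. Here is a minimal sketch in Python with pytest, assuming a hypothetical staging URL and a status endpoint that reports on each dependency:

```python
import requests

# Hypothetical base URL of the deployed application under test.
BASE_URL = "https://staging.example.com"


def test_application_is_up():
    # Smoke Test: the application responds at all.
    response = requests.get(f"{BASE_URL}/", timeout=10)
    assert response.status_code == 200


def test_dependencies_are_reachable():
    # Smoke Test: the application can reach its dependencies. Assumes a
    # status endpoint reporting each dependency (database, cache, ...).
    response = requests.get(f"{BASE_URL}/status", timeout=10)
    assert response.status_code == 200
    for dependency, state in response.json().items():
        assert state == "ok", f"{dependency} is not reachable"
```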
Run integration tests:
- In a development environment before merging code
- After merging to the Main Line and deploying to a testing or staging environment
- As part of a deployment verification process after a deployment to production. In this case, you might select only tests that verify read-only operations or perform benign operations such as a login (one way to select such a subset is sketched below).
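One way to select a safe subset for production is to tag tests, for example with pytest markers. This is a sketch with hypothetical endpoints, not the only way to do it:

```python
import pytest
import requests

BASE_URL = "https://www.example.com"  # hypothetical production URL


@pytest.mark.readonly
def test_login_page_renders():
    # Benign: fetching the login page changes no state.
    response = requests.get(f"{BASE_URL}/login", timeout=10)
    assert response.status_code == 200


@pytest.mark.readonly
def test_catalog_search_returns_results():
    # Benign: a read-only query against production data.
    response = requests.get(
        f"{BASE_URL}/api/search", params={"q": "widget"}, timeout=10
    )
    assert response.status_code == 200
```

After a production deployment, you would run only the tagged subset, e.g. `pytest -m readonly` (registering the `readonly` marker in `pytest.ini` keeps pytest from warning about it).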
Integration tests are costly in terms of run time and the complexity of setup and teardown, so you want to keep them minimal. Even so, an automated integration test is preferable to a manual post-deployment verification.
If a problem appears in an Integration Test, identify lower-level tests that thoroughly exercise the components that caused the error. Many errors can be traced to incorrect handling of bad inputs, and Unit Tests will help you pin down the correct way to handle those inputs.
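For example (a hypothetical scenario): suppose an Integration Test failed because checkout crashed on a quantity field containing whitespace, an input the Mocks never sent. Capturing the fix as Unit Tests means the problem is caught long before deployment:

```python
import pytest


def parse_quantity(raw):
    # Tolerate messy but valid input; reject everything else explicitly.
    value = int(str(raw).strip())
    if value < 1:
        raise ValueError("quantity must be at least 1")
    return value


@pytest.mark.parametrize("raw,expected", [("2", 2), ("  2 ", 2), (10, 10)])
def test_parse_quantity_accepts_messy_but_valid_input(raw, expected):
    assert parse_quantity(raw) == expected


@pytest.mark.parametrize("raw", ["", "two", "0", "-1", None])
def test_parse_quantity_rejects_bad_input(raw):
    with pytest.raises(ValueError):
        parse_quantity(raw)
```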
Reports from end users and errors detected in application logs can also point to opportunities for developing Unit Tests and Integration Tests.
Cautions
- An Integration Test can require some infrastructure to set up. While the same infrastructure can help you move toward a Continuous Deployment model, be careful not to spend effort on it at the expense of Unit Tests.
- While Integration Tests are valuable, be careful to understand their limits.
- The scaffolding you might need to run an Integration Test (a test user, a clean database, etc.) can give you a false sense of security, since it does not match the messier conditions of real production use.