Recently I’ve been thinking about the initial step in the continuous deployment pipeline: the development step. This is where the real work happens. Everything after it should be automated tests followed by a production deploy. Each step in the pipeline must verify the next step’s prerequisites and prove the absence of known regressions.

I consider the development phase to include writing code, committing to source control, and pushing code to trigger the first continuous integration step. I say first because continuous deployment pipelines often have more than one automated testing step. Code first goes through a set of local tests, then probably some sort of cross-service/component integration test, hopefully some sort of performance testing, perhaps even security testing, then even more business-specific tests, before finally deploying. Let’s consider the development phase bounded to everything under source control for a particular project.

Each commit’s goal is to make it through CI with the highest-quality code possible. Repositories should include these checks at a minimum (in no particular order):

  1. Code linting / formatting. Code must be consistent across the project. Inconsistencies slow down individual developers and cost extra time in code review.
  2. Whitebox unit and integration tests. Each codebase’s technical structure varies. I expect every repository to have tests for individual functions/classes, integration tests for multiple objects, and integration tests across internal design boundaries. I say “whitebox” here because these tests use the objects/functions themselves. These tests ensure the internal code works as expected.
  3. Blackbox smoke tests for every artifact. Consider an HTTP & JSON API. The test should start the process (using the same command that would be executed in production, against the built artifact), then make requests to all expected paths. The tests assert clients are able to make a request and get a response back. Thrift/gRPC/other RPC servers should start the server and use a client to make a request to every expected RPC. These tests should be concurrent and randomized to expose any strangeness in implementation details. These tests ensure the built artifact works as expected.
  4. Utility command smoke tests. Many projects have commands to prepare their environment. This may be a command to create a database, apply migrations, or bootstrap a third-party service. The test process should smoke test these commands and assert they do not fail. Their internal behavior is covered by the whitebox tests.
  5. Boot tests. Every application requires some configuration, yet it’s surprisingly common for that configuration to fail in practice. I’ve seen it happen too many times: processes fail to boot in staging/production because it’s the first time that code path was executed. This is unacceptable. Boot tests are paired with some sort of “dry run” mode, and they are matrix tests: boot the process once for every documented configuration value and assert the values are parsed and handled without errors or other unexpected failures. These tests are required because the implementation is usually covered by whitebox tests, but the act of booting the process with config files, command line options, or environment variables is not.
  6. External config file tests. Projects usually rely on third-party services for a number of things. Your CI system may have a JSON or YAML file describing the build process. These files may be changed incorrectly, and the effect may be felt immediately or not until code is deployed to production. All external configuration files (JSON, YAML, TOML, etc.) should be smoke tested by passing them through a parser at a minimum. Use-case-specific linting tools should be used where possible; they catch unknown keys or invalid configuration options. You’d be surprised how easy it is to eliminate an entire class of regressions with this approach.
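To make point 3 concrete, here’s a minimal sketch of a blackbox smoke test in Python. Everything specific is a stand-in: Python’s built-in `http.server` plays the role of the built artifact, and the command, port, and path list would be replaced by whatever production actually runs and serves.

```python
import subprocess
import sys
import time
import urllib.error
import urllib.request

def smoke_test(cmd, base_url, paths, timeout=10.0):
    """Start the server the same way production would, request every
    expected path, and return {path: HTTP status}. Always terminates
    the server process afterwards."""
    proc = subprocess.Popen(cmd, stdout=subprocess.DEVNULL,
                            stderr=subprocess.DEVNULL)
    try:
        # Poll until the process accepts connections, with a deadline.
        deadline = time.monotonic() + timeout
        while True:
            try:
                urllib.request.urlopen(base_url + paths[0]).close()
                break
            except (urllib.error.URLError, ConnectionError):
                if time.monotonic() > deadline:
                    raise
                time.sleep(0.2)
        statuses = {}
        for path in paths:
            with urllib.request.urlopen(base_url + path) as resp:
                statuses[path] = resp.status
        return statuses
    finally:
        proc.terminate()
        proc.wait()

if __name__ == "__main__":
    # Stand-in artifact: serve the current directory on localhost.
    cmd = [sys.executable, "-m", "http.server", "8043", "--bind", "127.0.0.1"]
    print(smoke_test(cmd, "http://127.0.0.1:8043", ["/"]))
```

A real version would also randomize request order and fire requests concurrently, per the point above, but the shape is the same: boot the artifact, hit every path, assert a response comes back.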
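The matrix boot test from point 5 can be sketched like this. The app here is a stand-in one-liner that “boots” by reading its config from the environment and exiting 0 on success; in a real pipeline it would be your artifact invoked with its dry-run flag, and the option names and values below (`LOG_LEVEL`, `REGION`) are hypothetical.

```python
import itertools
import os
import subprocess
import sys

# Stand-in app: reads LOG_LEVEL from the environment, exits nonzero if
# the value is unrecognized. Replace with e.g. ["./myservice", "--dry-run"].
APP = [sys.executable, "-c",
       "import os, sys; "
       "sys.exit(0 if os.environ['LOG_LEVEL'] in ('debug', 'info', 'warn') "
       "else 1)"]

# Every documented value for every documented option (hypothetical names).
MATRIX = {
    "LOG_LEVEL": ["debug", "info", "warn"],
    "REGION": ["us-east-1", "eu-west-1"],
}

def boot_matrix():
    """Boot the app once per configuration combination; return the
    combinations that failed to boot."""
    failures = []
    keys = list(MATRIX)
    for combo in itertools.product(*(MATRIX[k] for k in keys)):
        env = {**os.environ, **dict(zip(keys, combo))}
        result = subprocess.run(APP, env=env)
        if result.returncode != 0:
            failures.append(dict(zip(keys, combo)))
    return failures
```

Because every combination actually boots the process, this exercises exactly the config-parsing code paths that whitebox tests miss.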
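And point 6, the parser-level smoke test, is only a few lines. This sketch covers JSON and TOML with the standard library (`tomllib` needs Python 3.11+); YAML would need the third-party `pyyaml` package but follows the same pattern.

```python
import json
import pathlib
import tomllib  # Python 3.11+; use the `tomli` package on older versions

# Map file extensions to a parse function. Parsing is the minimum bar;
# use-case-specific linters should be layered on top where available.
PARSERS = {
    ".json": lambda p: json.loads(p.read_text()),
    ".toml": lambda p: tomllib.loads(p.read_text()),
}

def check_config_files(root):
    """Walk the repository and return a list of 'path: error' strings
    for every config file that fails to parse."""
    failures = []
    for path in pathlib.Path(root).rglob("*"):
        if not path.is_file():
            continue
        parser = PARSERS.get(path.suffix)
        if parser is None:
            continue
        try:
            parser(path)
        except Exception as exc:
            failures.append(f"{path}: {exc}")
    return failures
```

Run it over the repository root in CI and fail the build if the returned list is non-empty.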

These checks have eliminated every regression class I’ve come across. Following them ensures that an individual repository is functioning as expected. Further bugs/regressions are found in the integration step (if there is one).

I suggest you adopt these practices on your team. They have drastically lowered the number of defects leaking to production, ultimately raising the overall quality level. You’ll be surprised what they can do for you.

Good luck out there and happy shipping!