Plug and Play Services and Testing
I got to thinking...
One of the reasons some people give for having a complete test suite is that it makes it easier to change code. In the case of a microservice architecture, I have seen people motivate microservices with the possibility of replacing a service outright, using the tests as the safety net. The idea is that if you have a solid test suite, you can re-implement a service safely.
What I have realized is that in most of the applications I have been involved with, or read about, the tests are part of the application code base. The issue I was thinking about is that these tests are usually tied to the implementation, either directly or indirectly.
Let's take an example. Suppose I wrote a web service in Java and built a lovely set of tests for it in JUnit. Later, if I decide I want to port the service to Go, the JUnit tests are probably not going to work for testing my Go service. Rather, I will have to rebuild them in Go. At least, that is the path I have seen most people go down.
I can think of several solutions for this issue:
- Write a set of tests that are true black box tests for your service. These could be written in any framework or language, and they must access the service as it will be accessed in production. So for a web service using HTTP, the tests will use an HTTP client library to access it. By creating what are basically "integration" tests, you can test a new service with the same tests as the old one (see the first sketch after this list). This method doesn't preclude having separate unit tests.
- Rely on client tests. For example, you might use Selenium to test your web interface, and then run those tests against both the old and the new service implementations. The problem with this approach is that it is basically the same as the first solution, but with the client sitting between the tests and the server. That adds too many layers, and it requires that the client tests cover every situation.
- The final idea I had is based on a talk from GopherCon. A speaker from Parse/Facebook described how they rewrote their stack in Go. Part of their testing was to create a shadow cluster of their services running the new versions and compare results between the old and new implementations: requests are sent to both places, and the responses are compared for discrepancies. Using real-world input to test a new service leaves some edge cases open, but it could be built very generically (see the second sketch after this list). I am thinking that even if you built the tests from the first bullet, it would still be useful to run this step during deployment, before making the new services the ones the client "sees."
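To make the first bullet concrete, here is a minimal sketch of what such a black-box test could look like in Go. The `/users/42` endpoint and the `SERVICE_URL` environment variable are assumptions for illustration; the point is that the test only speaks HTTP, so the same test runs unchanged against the Java service and its Go replacement.

```go
// blackbox_test.go
//
// A minimal sketch of a black-box test. It assumes a hypothetical
// GET /users/42 endpoint and takes the service's base URL from the
// SERVICE_URL environment variable, so it knows nothing about the
// implementation language of the service it is testing.
package blackbox

import (
	"encoding/json"
	"net/http"
	"os"
	"testing"
)

func TestGetUser(t *testing.T) {
	base := os.Getenv("SERVICE_URL") // e.g. http://localhost:8080
	if base == "" {
		t.Skip("SERVICE_URL not set")
	}

	resp, err := http.Get(base + "/users/42")
	if err != nil {
		t.Fatalf("request failed: %v", err)
	}
	defer resp.Body.Close()

	if resp.StatusCode != http.StatusOK {
		t.Fatalf("expected 200, got %d", resp.StatusCode)
	}

	var user struct {
		ID   int    `json:"id"`
		Name string `json:"name"`
	}
	if err := json.NewDecoder(resp.Body).Decode(&user); err != nil {
		t.Fatalf("decoding response: %v", err)
	}
	if user.ID != 42 {
		t.Errorf("expected user ID 42, got %d", user.ID)
	}
}
```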
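And here is a rough Go sketch of the shadowing idea from the last bullet. This is not Parse's actual implementation, just an illustration of the shape of it: a small proxy answers clients from the old service, replays a copy of each request against the new service in the background, and logs any mismatch in status code or body. The oldURL and newURL values are placeholders.

```go
// A rough sketch of request shadowing. The old service stays authoritative,
// so a buggy new implementation can never affect clients; the mismatch log
// becomes, in effect, a test report generated from production traffic.
package main

import (
	"bytes"
	"io"
	"log"
	"net/http"
)

const (
	oldURL = "http://old-service:8080" // placeholder
	newURL = "http://new-service:8080" // placeholder
)

func shadowHandler(w http.ResponseWriter, r *http.Request) {
	body, _ := io.ReadAll(r.Body) // buffer the body so it can be replayed
	r.Body.Close()

	// Capture what we need from the request before the handler returns,
	// since the goroutine below may outlive it.
	method, uri, header := r.Method, r.URL.RequestURI(), r.Header.Clone()

	// The old service's response goes straight back to the client.
	oldStatus, oldBody, err := forward(oldURL, method, uri, header, body)
	if err != nil {
		http.Error(w, "upstream error", http.StatusBadGateway)
		return
	}
	w.WriteHeader(oldStatus)
	w.Write(oldBody)

	// The new service gets a copy in the background; discrepancies are logged.
	go func() {
		newStatus, newBody, err := forward(newURL, method, uri, header, body)
		if err != nil {
			return
		}
		if newStatus != oldStatus || !bytes.Equal(newBody, oldBody) {
			log.Printf("mismatch on %s %s: old=%d new=%d", method, uri, oldStatus, newStatus)
		}
	}()
}

// forward replays the captured request against a different base URL and
// returns the status code and response body.
func forward(base, method, uri string, header http.Header, body []byte) (int, []byte, error) {
	req, err := http.NewRequest(method, base+uri, bytes.NewReader(body))
	if err != nil {
		return 0, nil, err
	}
	req.Header = header
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		return 0, nil, err
	}
	defer resp.Body.Close()
	b, err := io.ReadAll(resp.Body)
	return resp.StatusCode, b, err
}

func main() {
	http.HandleFunc("/", shadowHandler)
	log.Fatal(http.ListenAndServe(":9000", nil))
}
```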
So those were my thoughts and possible solutions. I will post again if I get to put these ideas into practice.