Developers are often advised to rely on services as an architectural style. But building services that are both lean and scalable is not easy, and if you have doubts about your own approach, working with services quickly becomes a drag. Several years ago, Heroku recommended the Twelve-Factor App methodology for cloud-native applications: twelve rules that, according to Heroku, have proven to be a viable foundation. But some of these rules are outdated, and they were never explicitly designed for services.
Golo Roden is the founder and CTO of Native Web GmbH. He works on the conception and development of web and cloud applications and APIs, with a focus on event-driven and service-based distributed architectures. His guiding principle is that software development is not an end in itself, but must always serve the underlying domain.
So what could be more obvious than to combine this with my experience from the past few years in conceiving and developing web and cloud services, and to formulate twelve updated rules? That is exactly what I attempt in this blog post.
One thing up front, though: the selection of rules is, as it was with Heroku, certainly subjective and colored by experience. What works well for us at Native Web GmbH does not have to work well for everyone. As the saying goes: your mileage may vary (YMMV).
Rule #1: The domain comes first
The first rule is probably the one most often smiled at, but I still consider it the most important. And even though many will say "that's obvious, you don't need to mention it," practice shows me that in 99 cases out of 100 it is ignored. The first rule is: before developing, it must be clear what domain problem the application is meant to solve. Anyone who cannot answer that will not be able to develop a focused and technically adequate solution, entirely independent of the technology used.
Video: 12 rules for great (micro)service architecture
So the first thing you should do is develop a strong, well-founded, and detailed understanding of the relevant domain. Whether you use domain-driven design (DDD) or some other methodology is beside the point, but you must mentally move away from CRUD (create, read, update, delete), otherwise you end up building "forms on top of data" rather than the real core of the matter. CRUD is an anti-pattern and should be avoided.
Rule #2: Services, Instances, and Processes
The second rule states that a service always corresponds to exactly one operating-system process. In other words, a single service does not consist of dozens of separate programs; a service is one codebase for one process in one Git repository. If multiple instances of this service are started, then of course multiple processes are running, but there is still exactly one operating-system process per instance.
This also means that you do not put several services into a shared Git repository; each service gets its own repository. The reason is simple: it is the only way to ensure that each service can be maintained, versioned, documented, and deployed independently. Moreover, mapping services one-to-one to repositories makes it possible to manage permissions for each service's code at a fine-grained level.
Rule #3: One build script for all systems
The third rule requires that there be a script for building the service that can run anywhere. The technology is entirely secondary; in theory, a simple Makefile is enough. The point is: getting from source code to an executable artifact should require no more than running a single command. This command not only compiles and links, but also runs the tests, the code analysis, and everything else that belongs to the build.
It is important that this script can be run in identical form on every machine in the team, so that every developer can build locally and independently, and that the very same script is also used in the build phase of the CI/CD pipeline. The script should run successfully right after cloning the repository and installing the dependencies (plus setting any necessary permissions). Anything beyond that is unnecessary manual effort.
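As a sketch, such a single entry point could be a Makefile whose default target chains analysis, tests, and the build. The target names and the Go toolchain used here are assumptions for illustration; the pattern works the same with any other stack:

```makefile
# Hypothetical single-command build: running `make` does everything.
.PHONY: all lint test build

all: lint test build

lint:
	go vet ./...

test:
	go test ./...

build:
	go build -o bin/service ./cmd/service
```

The same `make` invocation then works on every developer machine and in the CI/CD pipeline, without any extra steps.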
Rule #4: A stable main branch for releases
Once the build works, the fourth rule comes into play; it concerns a suitable branching strategy. Specifically, that strategy looks like this: there is a main branch, which by definition is considered stable. All development, whether adding new features, fixing bugs, or anything else, happens in separate branches that are branched off main and are later merged back into main (this corresponds to the classic feature-branch approach).

The only way to perform such a merge into main is via a pull request, in which all tests, code analysis, and so on are run again. In addition, every pull request is reviewed thoroughly before it may be merged into main. So apart from the stable production branch main, there are only the short-lived feature branches around it. And every time a branch is merged into main, it is squashed, and from the new state of main a new release of the service is created, tagged with a semantic version.
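The whole flow can be walked through locally; the following sketch uses a throwaway repository, and all branch, file, and version names are made-up examples (in practice the squash-merge happens via a reviewed pull request, not by hand):

```shell
set -e
# Demo in a throwaway repository.
cd "$(mktemp -d)"
git init -q -b main
git config user.email demo@example.com
git config user.name "Demo"
echo "v1" > service.txt
git add service.txt
git commit -q -m "Initial commit on main"

# All work happens on a short-lived branch off main ...
git switch -q -c feature/export-csv
echo "v2" > service.txt
git add service.txt
git commit -q -m "Add CSV export"

# ... and returns to main squashed (via a pull request in real life).
git switch -q main
git merge -q --squash feature/export-csv
git commit -q -m "Add CSV export (squashed)"
git branch -q -D feature/export-csv

# Every merge into main yields a new, semantically versioned release.
git tag v1.1.0
```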
Rule #5: Versioned Containers
The fifth rule states that every commit to main produced by a pull request creates not only a new version but also a new Docker image, which is versioned as well. Any version of the service can then be downloaded and run at any time based on its version number, either as a binary or as a complete Docker image.
It is important that the service starts quickly, which is why the Docker image must not be built at startup time; all of that must already have happened beforehand. A multi-stage build helps keep the final production image as clean and small as possible. And since containers must expect to be killed at any moment, there must be no special requirements for shutting the service down gracefully.
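A multi-stage build of that kind could look roughly like this; the Go base image and the paths are assumptions for the sketch, and the pattern carries over to other stacks unchanged:

```dockerfile
# Build stage: compile inside a full toolchain image.
FROM golang:1.22 AS build
WORKDIR /src
COPY . .
RUN CGO_ENABLED=0 go build -o /service ./cmd/service

# Final stage: only the static binary ends up in the production image.
FROM scratch
COPY --from=build /service /service
ENTRYPOINT ["/service"]
```

The toolchain, sources, and intermediate artifacts stay behind in the build stage, so the image that actually ships contains nothing but the executable.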
Rule #6: An HTTP-based CQRS API
After the previous points on the infrastructure for services, we now come to their content. Every service has an API through which it can be accessed from the outside. By default this is a very classic HTTP API: no gRPC, no GraphQL, not even HTTPS. The reason is simple: as long as you use plain, unencrypted HTTP, the API is easy to test and debug. In addition, HTTP is compatible with practically every technology and platform, you do not need a special client (in principle even curl is enough), and you save yourself all the effort around certificates and the like.
This HTTP API consists of three parts:
- First, there are domain commands. These are POST routes that cause some change in the service's state.
- Second, there are domain queries. These are GET routes that read and return data from the service.
- Third, there are a few technical routes, specifically a ping route and a health route, to determine whether the service is running at all and, if so, what state it is in.
If commands and queries do not mean much to you yet, take a look at the CQRS design pattern. This also brings us full circle to the first rule, which says to focus on the domain and not on CRUD.
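The three parts of such an API can be sketched with nothing but the standard library; the route names and the in-memory state below are made up for illustration, not prescribed by the rule:

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

# In-memory state for the sketch; a real service would persist its data.
items = []

class ServiceHandler(BaseHTTPRequestHandler):
    def _send(self, status, payload):
        body = json.dumps(payload).encode()
        self.send_response(status)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def do_POST(self):
        # Domain commands: POST routes that change the service's state.
        if self.path == "/add-item":
            length = int(self.headers.get("Content-Length", 0))
            command = json.loads(self.rfile.read(length) or b"{}")
            items.append(command.get("name"))
            self._send(200, {"ok": True})
        else:
            self._send(404, {"error": "unknown command"})

    def do_GET(self):
        # Domain queries: GET routes that read data without side effects.
        if self.path == "/items":
            self._send(200, {"items": items})
        # Technical routes: ping (is it running?) and health (what state?).
        elif self.path == "/ping":
            self._send(200, {"pong": True})
        elif self.path == "/health":
            self._send(200, {"status": "ok", "itemCount": len(items)})
        else:
            self._send(404, {"error": "unknown query"})

    def log_message(self, *args):
        pass  # keep the sketch quiet
```

Running the service is then a single line, for example `HTTPServer(("", 3000), ServiceHandler).serve_forever()`, and every route is reachable with plain curl.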