- Distributed applications enable heterogeneous environments with different systems and architectures. Benefits include platform independence, availability, and scalability.
- This article shows various options for configuration, architecture, and modular design using a range of technologies and frameworks, with Kubernetes at the center.
- Monitoring, analysis, and control of communications in modular networks can be done through a service mesh with or without the sidecar model.
- The twelve principles of good cloud apps formulated by Adam Wiggins largely apply to distributed applications as well.
Distributed applications, and distributed systems more broadly, are growing in popularity, largely due to advances in microservices and container technology. Together, these technologies enable independent development in autonomous teams, freedom in the choice of languages and frameworks, and improved flexibility through scaling and load balancing.
Matthias Haeusler is Chief Technologist at Novatek Consulting GmbH with a focus on Cloud Native. He is a lecturer in “Distributed Systems” and a regular speaker at international IT conferences.
This article explores various ways to implement distributed application architectures using modern, cloud-native software technologies. On the one hand, these include frameworks associated with programming languages, which are especially common in the Java environment. On the other hand, there are platforms like Kubernetes and service mesh – both in the traditional style and in new variants without sidecars.
First, it is important to clarify why a distributed application architecture makes sense at all and which components are necessary for a successful implementation. The principles of the 12-Factor App published by Adam Wiggins in 2011 also serve as a guide (see box “Concepts of the 12-Factor App”).
Concepts of the 12-Factor App

Adam Wiggins, co-founder of the PaaS provider Heroku, published the 12-Factor App concept in 2011. It describes a proven approach (best practice) to developing applications that run efficiently in the cloud and make optimal use of its capabilities. The 12-Factor App defines fundamental principles intended to make an application platform-independent, scalable, and configurable at a granular level. The principles cover not only cloud-specific aspects but also general principles of good software development, for example using one version-controlled codebase per component and clearly separating code from dependencies. By following the twelve principles, developers ensure that their applications fit well with the cloud infrastructure and benefit from the advantages of the platform.
The following factors are particularly noteworthy:
- Factor 3: “Configuration” (separation of configuration and code): The separation of configuration and code, on the one hand, enables flexible adaptation of the application without recompilation and, on the other hand, use in different environments with different configurations.
- Factor 6: “Processes” (statelessness and scalability): Stateless processes make it easier to scale and manage applications because processes do not need to store information about their previous state.
- Factor 7: “Port binding” (linking ports and network communications): The use of standardized network protocols simplifies interactions between components and enables easy integration into different environments.
- Factor 11: “Logs” (logs as streams): Treating logs as event streams enables more efficient error analysis and troubleshooting when operating distributed applications.

The principles of the 12-Factor App are used well beyond Heroku, in systems such as Cloud Foundry, Spring Boot, Docker, and Kubernetes, to operate modern applications successfully in dynamic and agile environments. A short sketch after this paragraph shows how several of the factors look in code.
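The following minimal Spring Boot sketch is purely illustrative and not taken from the article; the class and property names (GreetingApplication, greeting.text) are assumptions. It shows how externalized configuration (factor 3), stateless request handling (factor 6), port binding (factor 7), and logging to stdout as a stream (factor 11) typically look in a 12-factor-style service.

```java
// Hypothetical Spring Boot application illustrating factors 3, 6, 7, and 11.
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

@SpringBootApplication
@RestController
public class GreetingApplication {

    private static final Logger log = LoggerFactory.getLogger(GreetingApplication.class);

    // Factor 3: configuration is injected from the environment
    // (e.g. GREETING_TEXT or greeting.text in application.properties),
    // not hard-coded or compiled into the artifact.
    @Value("${greeting.text:Hello from a stateless process}")
    private String greetingText;

    // Factor 6: the handler keeps no session state between requests,
    // so any number of identical instances can serve traffic.
    @GetMapping("/greet")
    public String greet() {
        // Factor 11: log to stdout as an event stream; the platform
        // (Docker, Kubernetes, ...) collects and routes the output.
        log.info("Handling /greet request");
        return greetingText;
    }

    public static void main(String[] args) {
        // Factor 7: the service exports HTTP via port binding; the port is
        // configurable through SERVER_PORT or server.port (default 8080).
        SpringApplication.run(GreetingApplication.class, args);
    }
}
```

Running the same artifact with different environment variables, for example SERVER_PORT=9090 and GREETING_TEXT=Hallo (which Spring Boot maps to greeting.text via relaxed binding), changes port and output without recompiling, which is exactly the point of separating configuration from code.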
Why distributed systems?
The traditional monolithic approach to software architecture is considered outdated by many. Proponents of distributed systems and microservices architectures in particular often speak disparagingly of a “big ball of mud”. But even the distributed approaches considered modern are not free of problems, as Peter Deutsch pointed out as early as 1994 with his fallacies of distributed computing, which have not lost their validity to this day.
In particular, splitting an application into separate modules creates network dependencies between the components, which in turn affect latency, configuration, and error handling. Nevertheless, in some scenarios a distributed approach makes sense, and sometimes it is even unavoidable.

The following takes a closer look at these advantages and the aspects related to them. The basic goal of a distributed application should be to benefit both users and development teams. These benefits lie mainly in non-functional requirements such as availability, reliability, and scalability.
Such a system should feel like a single unit to its users, as Andrew Tanenbaum demands in his book “Distributed Systems”. Anyone who uses Google Maps is not interested in how many containers are behind it or which programming languages are used; the only things that matter are reliability and functionality.
Heterogeneity factor
Heterogeneity plays a central role in the theory of distributed systems, for example in relation to parallelism and concurrency. As Figure 1 shows, greater efficiency can be achieved by processing heterogeneous tasks in parallel.
Parallel processing of tasks in a distributed system (Figure 1).
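As an illustration of the parallelism shown in Figure 1, the following sketch (not from the article; all task names are hypothetical) runs three heterogeneous tasks concurrently in Java, so the total time is determined by the slowest task rather than the sum of all tasks.

```java
// Hypothetical sketch: three heterogeneous tasks (database query, REST call,
// image rendering) run in parallel instead of sequentially.
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class ParallelTasks {

    public static void main(String[] args) {
        ExecutorService pool = Executors.newFixedThreadPool(3);

        // Each task could live in its own service, container, or runtime.
        CompletableFuture<String> dbResult =
                CompletableFuture.supplyAsync(() -> queryDatabase(), pool);
        CompletableFuture<String> apiResult =
                CompletableFuture.supplyAsync(() -> callRemoteApi(), pool);
        CompletableFuture<String> imageResult =
                CompletableFuture.supplyAsync(() -> renderImage(), pool);

        // Combine the partial results once all parallel tasks have finished.
        String combined = CompletableFuture.allOf(dbResult, apiResult, imageResult)
                .thenApply(v -> dbResult.join() + " | " + apiResult.join() + " | " + imageResult.join())
                .join();

        System.out.println(combined);
        pool.shutdown();
    }

    // Placeholder implementations standing in for real subsystems.
    private static String queryDatabase() { return "db rows"; }
    private static String callRemoteApi() { return "api payload"; }
    private static String renderImage()   { return "image bytes"; }
}
```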
Heterogeneity is also reflected in dependencies on operating systems, runtimes, frameworks, and the like (see Figure 2). In all these cases it is impossible to combine the application into a single monolithic artifact, which makes a distributed approach necessary.
Diversity in operating systems and technologies (Figure 2).
Finally, extensibility should be mentioned. A distributed architecture offers the advantage that new components can be integrated into an existing system as independent modules without significantly affecting the existing ones; there is no need to recompile or repackage existing components.
Resilience factor
The resilience factor is primarily about keeping the application highly available and resilient to unexpected events such as fluctuations in the number of users or failures of subsystems and network segments. The failure of an individual component must remain controllable and must under no circumstances bring down the entire application. Scaling out individual components provides reliability through redundancy: if one instance fails, enough other instances must still be available so that the service continues without interruption (see Figure 3). Scaling out serves not only redundancy but also load distribution, for example when the number of users increases, so that the load is spread evenly across the individual instances and the overall system maintains the desired performance (see Figure 4).
Reliability through redundancy. If one instance fails, service can be guaranteed through redundant instances (Figure 3).
Fluctuations in the number of users can be balanced through balanced load distribution (Figure 4).
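Conceptually, distributing requests across redundant instances can be sketched as a simple round-robin, as below. This is purely illustrative; in a Kubernetes setup this job is done by a Service, an ingress, or the service mesh, and the instance URLs here are made up.

```java
// Conceptual sketch of load distribution across redundant instances.
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

public class RoundRobinBalancer {

    private final List<String> instances;
    private final AtomicInteger counter = new AtomicInteger();

    public RoundRobinBalancer(List<String> instances) {
        this.instances = List.copyOf(instances);
    }

    // Pick the next instance in round-robin order. If one instance is removed
    // from the list after a failure, the remaining ones keep serving traffic.
    public String nextInstance() {
        int index = Math.floorMod(counter.getAndIncrement(), instances.size());
        return instances.get(index);
    }

    public static void main(String[] args) {
        RoundRobinBalancer balancer = new RoundRobinBalancer(
                List.of("http://app-1:8080", "http://app-2:8080", "http://app-3:8080"));
        for (int i = 0; i < 6; i++) {
            System.out.println("Route request " + i + " to " + balancer.nextInstance());
        }
    }
}
```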
If the number of users suddenly increases or, worse, a denial-of-service attack occurs, the load can rise so quickly that it can no longer be absorbed by scaling. To protect applications from this, a network component can be placed in front of them that blocks or at least throttles incoming traffic. Patterns such as circuit breakers or bulkheads are typically used in such cases.
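The article does not prescribe a particular implementation; as one possible example, the following sketch uses the Resilience4j library (an assumption) to wrap a downstream call in a circuit breaker that fails fast once the failure rate crosses a threshold. Service name, thresholds, and fallback are illustrative.

```java
// Sketch of a circuit breaker with Resilience4j (assumed to be on the classpath).
import io.github.resilience4j.circuitbreaker.CallNotPermittedException;
import io.github.resilience4j.circuitbreaker.CircuitBreaker;
import io.github.resilience4j.circuitbreaker.CircuitBreakerConfig;
import io.github.resilience4j.circuitbreaker.CircuitBreakerRegistry;

import java.time.Duration;

public class DownstreamClient {

    private final CircuitBreaker circuitBreaker;

    public DownstreamClient() {
        // Open the circuit when at least half of the recent calls fail,
        // then reject calls for 30 seconds before probing again.
        CircuitBreakerConfig config = CircuitBreakerConfig.custom()
                .failureRateThreshold(50)
                .slidingWindowSize(10)
                .waitDurationInOpenState(Duration.ofSeconds(30))
                .build();
        this.circuitBreaker = CircuitBreakerRegistry.of(config)
                .circuitBreaker("inventory-service");
    }

    public String fetchInventory() {
        try {
            // The protected call; failures are recorded by the breaker.
            return circuitBreaker.executeSupplier(this::callInventoryService);
        } catch (CallNotPermittedException e) {
            // Circuit is open: fail fast with a fallback instead of piling
            // more load onto an already struggling downstream service.
            return "inventory temporarily unavailable";
        }
    }

    // Placeholder for the real remote call (HTTP client, gRPC, ...).
    private String callInventoryService() {
        return "42 items in stock";
    }
}
```

A bulkhead follows a similar idea but limits the number of concurrent calls per downstream dependency, so that one slow service cannot exhaust all available threads.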
Another aspect of resilience is uninterrupted availability during application updates. To ensure this, various zero-downtime deployment techniques are available, including blue/green deployments and canary releases.
Both variants work on the same basic principle: a new version of the application is deployed while the old one is still running. Whereas a blue/green deployment switches over in a single step, a canary deployment introduces the new version gradually and selectively, initially for a limited group of users, before the new release goes fully into production.
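How a canary release directs only a small share of traffic to the new version can be sketched as follows. In practice this routing is usually handled by an ingress controller or a service mesh rather than application code; the URLs and the 5 percent share are purely illustrative assumptions.

```java
// Conceptual canary routing: a configurable share of requests goes to the
// new release, the rest to the stable one.
import java.util.concurrent.ThreadLocalRandom;

public class CanaryRouter {

    private final String stableUrl;
    private final String canaryUrl;
    private final int canaryPercentage; // e.g. 5 means 5 percent of traffic

    public CanaryRouter(String stableUrl, String canaryUrl, int canaryPercentage) {
        this.stableUrl = stableUrl;
        this.canaryUrl = canaryUrl;
        this.canaryPercentage = canaryPercentage;
    }

    // Route a single request: a small, random share hits the canary version.
    public String routeRequest() {
        int roll = ThreadLocalRandom.current().nextInt(100);
        return roll < canaryPercentage ? canaryUrl : stableUrl;
    }

    public static void main(String[] args) {
        // Start with 5 percent canary traffic; raise the share step by step
        // as confidence in the new release grows, until it reaches 100 percent.
        CanaryRouter router = new CanaryRouter(
                "http://shop-v1:8080", "http://shop-v2:8080", 5);
        for (int i = 0; i < 10; i++) {
            System.out.println("Request " + i + " -> " + router.routeRequest());
        }
    }
}
```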
