I recently presented on the topic of Breaking up Monolithic Websites with Azure and I thought it also worth sharing as a blog post. If you would like to watch the full talk, you can see the video on YouTube.
What is a Monolith?
A software monolith is one in which all layers of the application are coded, managed, and deployed together. This includes the user interface, business logic, and possibly the datastores. There are scenarios where monoliths can be beneficial, making it easier to maintain the system, find code, and deploy quickly. That said, as the monolith grows, its complexity can cause some challenges.
Issues at Scale
Although there are upsides to a monolith, as systems grow, become complex with many separate features, and gain a large number of contributing developers, many of the benefits of a monolith become its burdens.
Large Unmanageable Codebase and High Cognitive Load
As the system grows, with all features in the same solution, it becomes very easy to blur code boundaries and allow features to become tightly coupled. It can become hard to find where to make changes and hard to identify which other parts of the code may be impacted by those changes. Unit tests can help, however as the code grows you can end up with thousands of tests to run and maintain.
In addition, as the code gets more complex it gets to the point where no one person can understand it all, causing high cognitive load.
Lack of Ownership
If we have a large team working on a single code base, we can end up with a system where everyone owns everything, and no one owns anything. This becomes apparent when issues come in and we need to determine who should fix them. With a lack of ownership, each new feature may get allocated to a developer who has not worked on that area before, so they first must learn how to work with that part of the code. This can add up to an inefficient process and frustration in the team.
All or Nothing Scalability
With the monolith being deployed as a single block, if you run a campaign pushing people towards one part of your website you must scale the entire system. This can be costly and slow to perform.
With a large system and lots of tests, a deploy may take hours. As we have a single code base and many contributors, we only have two real options for how we manage deploys.
We can practice continuous delivery and deploy each feature as it is ready. In this case each person needs to queue up behind the last. This runs into trouble when features are being completed quicker than they can be shipped, or you simply run out of hours in the day.
Alternatively, you can batch up your changes. The downside here is that with so many contributors and features, when something goes wrong it means a full rollback. Following this there is a lengthy process of unpicking the code to find the breaking change and then finding the person who last worked on it.
3 Part Solution
To solve the challenges we need to break up the monolith into smaller components owned by separate development teams. These smaller components can be easily understood and independently deployed, managed and scaled. To achieve this requires a three-part solution.
People and Team
Conway's Law states that “organizations design systems that mirror their own communication structure”, so in turn if you have a single pool of developers who look after the application, you are likely to build a single monolith. If you have a backend team and a front-end team, you are likely to build two monoliths.
To split the monolith into separate parts, Conway's Law would suggest we need an organisation structure that mirrors this. The Team Topologies book is a great resource for learning more about this topic.
Software
The software is obviously at the heart of this challenge. It can be tempting to start again with a “we are going to build it right this time” view. The reality is that this is hard to achieve. The large monolith was not created overnight; it may be 5 or 10 years old, so how long would it take to rewrite the entire system? During that time it may also cause much disruption, and all feature development may have to go on hold.
A much nicer and more manageable approach is to treat this as a set of smaller projects, each of which we can tackle at a manageable pace.
There is lots of information available on breaking up backend logic into microservices. As these services are not customer facing, we are free to arrange them how we want. Breaking up the website is harder, as the customer still needs to see a single website even if it is hosted and developed as multiple smaller websites. This, however, can be achieved through Azure infrastructure.
It is likely the monolith has existed for many years, so if we split the website into many small websites, we need to ensure the URL structure does not change. For a new site we could use subdomains; however, to split up an existing website without changing its URLs, we can use Azure infrastructure.
Routing and Load Balancers
Load balancers are useful for distributing traffic across multiple servers. There are many load balancers inside Azure that are not directly exposed; for example, when we scale out an App Service plan, an internal load balancer is used to share traffic between the instances.
A layer 4 load balancer works at the transport layer, distributing packets of data but not looking inside them. A layer 7 load balancer on the other hand works at the application layer and can route based on information inside the packets of data. This means based on the URL we can route the traffic internally to a website hosted in a different location.
This allows us to create vertical slices of our application while the user still sees a single website.
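To make the distinction concrete, here is a minimal Python sketch (not Azure code; all names and rules are illustrative) of what each layer can route on: layer 4 sees only the connection tuple, while layer 7 can parse the HTTP request and inspect the URL path.

```python
# Sketch: layer 4 routes on the connection tuple alone; layer 7 parses
# the HTTP request and can route on the URL path inside it.

def layer4_route(client_ip: str, client_port: int, servers: list[str]) -> str:
    """Layer 4: only the connection tuple is visible, so distribute by hash."""
    return servers[hash((client_ip, client_port)) % len(servers)]

def layer7_route(raw_request: str, path_rules: dict[str, str], default: str) -> str:
    """Layer 7: read the request line (e.g. 'GET /checkout HTTP/1.1')
    and route on the URL path."""
    path = raw_request.split("\r\n")[0].split(" ")[1]
    for prefix, backend in path_rules.items():
        if path.startswith(prefix):
            return backend
    return default

request = "GET /checkout/basket HTTP/1.1\r\nHost: example.com\r\n\r\n"
rules = {"/checkout": "checkout-site", "/search": "search-site"}
print(layer7_route(request, rules, "monolith"))  # checkout-site
```

The layer 4 balancer can only spread identical copies of the whole application; the layer 7 balancer is what lets different parts of the URL space live on different sites.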
Azure Front Door
Azure supports a number of layer 4 and layer 7 load balancers. To split traffic we need a layer 7 load balancer, which gives us the options of either an Application Gateway or Azure Front Door. An Application Gateway exists inside a single region, so we could either pair it with Traffic Manager or use Azure Front Door instead. Azure Front Door provides other features too, so it is often a good choice.
At the time of writing, Front Door comes in three SKUs: Front Door, Front Door Standard (preview), and Front Door Premium (preview). These all work in the same way, although the preview SKUs have additional features.
Each of the new microsites can be configured as a backend pool / origin group. Routes are then configured to direct traffic to these based on URL patterns. The monolith can be added as a fallback, and over time new microsites are added until the point where you can remove the monolith entirely.
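The behaviour of such routes can be sketched as an ordered rule list, with the monolith as the lowest-priority catch-all so any page that has not yet been migrated keeps working. This is illustrative Python, not a real Front Door configuration, and the hostnames and patterns are hypothetical:

```python
# Illustrative route table: each new microsite owns a URL pattern, and the
# monolith is the final catch-all. Hostnames are hypothetical.
import fnmatch

ROUTES = [
    ("/products/*", "products-app.azurewebsites.net"),  # new microsite
    ("/basket/*",   "basket-app.azurewebsites.net"),    # new microsite
    ("/*",          "monolith-app.azurewebsites.net"),  # fallback: the monolith
]

def match_origin(path: str) -> str:
    """Return the origin of the first route whose pattern matches the path."""
    for pattern, origin in ROUTES:
        if fnmatch.fnmatchcase(path, pattern):
            return origin
    raise ValueError(f"no route matched {path!r}")  # unreachable with a /* rule

print(match_origin("/products/42"))  # products-app.azurewebsites.net
print(match_origin("/about"))        # monolith-app.azurewebsites.net
```

Removing the monolith at the end is then just a matter of deleting the catch-all rule once nothing matches it any more.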
This approach also allows for a polyglot landscape using the technologies that best suit that part of the website.
You can also override routes based on HTTP headers and other application-level attributes.
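A header-based override might look like the following sketch, where a request header sends selected users to a canary origin ahead of everyone else. This is illustrative logic, not the Front Door rules API; the header name and origins are hypothetical:

```python
# Sketch of a header-based override on top of URL routing. The header name
# and hostnames are hypothetical, used only to show the idea.

def resolve_origin(path: str, headers: dict[str, str]) -> str:
    # Override rule: requests flagged as beta go to the canary deployment.
    if headers.get("X-Beta-User") == "true":
        return "canary-app.azurewebsites.net"
    # Otherwise fall through to the normal URL-based routing.
    if path.startswith("/search"):
        return "search-app.azurewebsites.net"
    return "monolith-app.azurewebsites.net"
```

This kind of rule is useful for gradual rollouts: the same URL space is served by different origins depending on who is asking.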
Other Azure Services
We have talked a lot about splitting up the monolith into smaller websites and backend services; however, we still need to display a coherent website with a single menu, a shared set of styles, and so on.
We can use Azure App Configuration to share configuration between the new microsites and the remaining monolith. This can be attached to Key Vault using Key Vault references if the shared config contains any secrets.
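To my understanding, a Key Vault reference is stored in App Configuration as a small JSON value pointing at the secret, marked with a dedicated content type so clients know to resolve it against Key Vault rather than read it as a plain value. The sketch below builds that shape; the vault and secret names are hypothetical:

```python
# Sketch of the shape of a Key Vault reference as stored in App Configuration:
# a JSON value holding the secret URI, plus a dedicated content type.
# Vault and secret names are hypothetical.
import json

KEYVAULT_REF_CONTENT_TYPE = (
    "application/vnd.microsoft.appconfig.keyvaultref+json;charset=utf-8"
)

def make_keyvault_reference(vault: str, secret: str) -> dict:
    """Build the setting payload App Configuration stores for a reference."""
    return {
        "content_type": KEYVAULT_REF_CONTENT_TYPE,
        "value": json.dumps(
            {"uri": f"https://{vault}.vault.azure.net/secrets/{secret}"}
        ),
    }

setting = make_keyvault_reference("shared-vault", "ApiKey")
print(json.loads(setting["value"])["uri"])
```

Each microsite and the monolith can then resolve the same reference, so the secret itself lives in one place.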
In this post we looked at how monoliths can become problematic at scale, and how the solution involves looking at the team, the software, and the infrastructure. We saw how we can use the layer 7 load balancer Azure Front Door to split up the website while keeping the URL structure in place for existing users. If you would like to watch the full talk, you can see the video on YouTube.
Title Photo by Zoltan Tasi