There is a trend toward decoupled software architecture. It’s not really new, but it keeps growing, driven most recently by the microservices movement as well as by those trying to find architectures that fit the decentralized and distributed nature of the Web. This trend applies to all sorts of software, and especially to content management, with the rise of concepts such as Content as a Service (CaaS), headless CMS and decoupled CMS.
Simply put, this trend is all about building software solutions from small modules or services that are as independent as possible from the other parts of the solution.
Decoupled software architectures come with a wide range of advantages and promises. They are a step forward. They skillfully embrace the nature of the Web, which is distributed and asynchronous. They are also better at fulfilling the requirements for flexibility and agility, as well as the ever-growing need for scalability, which is especially true for us in the content technology environment, where we all know the exponential growth of content.
When building software, we often draw a demarcating line between decoupled architectures and integrated solutions, especially when looking at content management and digital experience management. Different folks have covered this lately, such as Mark Rodseth from Huge in his article “Choosing a Content Management System”, or David Buchmann, who advocated building with lower-level components in his article “Decoupling your CMS”.
The decoupled approach has a growing number of advocates, especially in developer ecosystems, while the integrated suites are still the preference of many CTOs and CIOs.
Admittedly, integrated software comes with a lot of advantages too, such as the decreased need for custom development. After all, there are reasons why we still buy cars instead of building them from decoupled parts.
But is it really that simple?
Let’s explore this topic, which has been discussed for years but remains a very valid discussion. In their respective articles, Mark and David drew the relationships between the essential parts of content management. They accurately described the main functions that we can think of isolating into a distributed architecture.
Yet I find this a bit too simplistic for modern content management, which cannot exist without additional functions such as search, personalization and user management. The more we go into detail, the more functions we can identify that could be isolated and decoupled:
- Content authoring
- Content management
- Content delivery
- Site delivery
- App delivery
- Authentication and identification
There is clearly value in having those all decoupled.
The advantage of a distributed approach is obvious to developers and architects: the promise of maintainability and extensibility, the structure to meet very specific requirements, and many, many more.
What about the Cons?
While the benefits seem clear, let’s take the counter position and analyse the dangers of a distributed approach.
Many strong dependencies between some of the functions
Yes, there are some strong dependencies between the vital parts of a content management platform, as seen in the diagram above.
Let’s look, for instance, at Preview. Authors more than ever need to see what the content they are writing will look like to an audience. This is one of the challenges of separating content from presentation: content needs to live by itself so it can adapt to any form or screen, yet it still needs to be presented to authors in a specific context.
If your authoring system is different from your delivery system, you will have to connect the two in order to bring the preview into your authoring domain.
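To make this concrete, here is a minimal Python sketch of how an authoring UI might build a cross-domain preview link pointing at a separate delivery system. Every name here (the `/preview` route, the `siteaccess` and `token` parameters) is illustrative, not any specific product’s API:

```python
from urllib.parse import urlencode

def preview_url(delivery_base, content_id, version_no, token):
    """Build a link the authoring UI can embed (e.g. in an iframe) so
    editors see a draft rendered by a separate delivery system.
    Route and parameter names are hypothetical."""
    query = urlencode({
        "version": version_no,    # which draft to render
        "siteaccess": "preview",  # render in a dedicated preview context
        "token": token,           # short-lived credential crossing domains
    })
    return f"{delivery_base}/content/{content_id}/preview?{query}"

print(preview_url("https://delivery.example.com", 42, 3, "s3cret"))
# → https://delivery.example.com/content/42/preview?version=3&siteaccess=preview&token=s3cret
```

Even this toy version shows the glue you sign up for: a shared token scheme, an agreed route, and a delivery system that knows how to render unpublished versions.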
Something similar exists in the relationship between Search and Publishing. Search is a key component of the back-end of a management system. If search is an external, decoupled system, you will have to pay a lot of attention to make sure it works well for CMS users, not only for end-users.
This adds constraints to your search engine. Having integrated eZ with many external search engines, we know from experience that there are many traps: real-time indexing, the capability to index the actual content structure including metadata (and not just blobs of HTML as published), dealing with permissions, dealing with multilingual content…
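To illustrate what “indexing the same content structure” means in practice, here is a hedged Python sketch that flattens a structured content item, with metadata, per-language fields and permission information, into a search document instead of a rendered HTML blob. The content shape is invented for the example:

```python
def to_index_document(content):
    """Flatten a structured content item into a search document.
    Keeps metadata and permission data so the engine can serve
    CMS users (filtered, multilingual queries), not just end-users."""
    doc = {
        "id": content["id"],
        "type": content["type"],
        "published": content["published"],
        # Index who may read the item, so permissions can be
        # enforced as a filter at query time.
        "allowed_groups": content["allowed_groups"],
    }
    # One field per language, e.g. title_en / title_de, so each
    # language can get its own analyzer in the search engine.
    for lang, fields in content["fields"].items():
        for name, value in fields.items():
            doc[f"{name}_{lang}"] = value
    return doc

article = {
    "id": 42, "type": "article", "published": "2016-02-01",
    "allowed_groups": ["editors", "anonymous"],
    "fields": {"en": {"title": "Hello"}, "de": {"title": "Hallo"}},
}
print(to_index_document(article)["title_de"])  # → Hallo
```

Each of the traps above maps onto a line here: the permission filter, the per-language fields, and the fact that this mapping must run on every publish, in real time.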
Without naming them all, there are many other interdependencies between domains: think of the cache system, which is tightly connected to both the back-end and the delivery system; User Generated Content (UGC), which also impacts both front-end and back-end; and URL management.
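The cache dependency in particular can be made concrete with a small, hypothetical sketch of tag-based invalidation in Python: publishing one content item must purge exactly the cached pages that embedded it, which is hard to arrange when cache and repository know nothing about each other:

```python
class TagAwareCache:
    """Minimal sketch of tag-based cache invalidation: pages are stored
    with tags naming the content items they embed, so publishing one
    item purges exactly the affected pages and nothing else."""

    def __init__(self):
        self.store = {}  # url -> cached html
        self.tags = {}   # tag -> set of urls carrying that tag

    def put(self, url, html, tags):
        self.store[url] = html
        for tag in tags:
            self.tags.setdefault(tag, set()).add(url)

    def invalidate(self, tag):
        # Called by the publishing step of the CMS.
        for url in self.tags.pop(tag, set()):
            self.store.pop(url, None)

cache = TagAwareCache()
cache.put("/", "<html>front</html>", ["content-42", "content-7"])
cache.put("/about", "<html>about</html>", ["content-7"])
cache.invalidate("content-42")  # publishing item 42 purges only "/"
```

This is the coupling in miniature: the delivery side must emit tags the repository understands, and the repository must notify the cache on every publish.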
More code to develop, more risks
Going the decoupled route may make the job more interesting for architects and developers, but it adds the need to develop a lot of glue code between components. And the lower you go in granularity, the greater the custom development challenge grows.
It will take time and energy (not to mention money) to build integrations that work well, with the risk that they don’t cover the whole scope, the main casualty typically being the editorial user experience. This is why CIOs in large enterprises without a strong software-house culture will be reluctant and will lean toward choosing an integrated system.
If you go fully decoupled at a low level, you are basically about to embark on a very serious technical endeavor, and you should be prepared for it. You’re becoming a software company.
The best of two worlds?
But do we really have to choose between the two approaches, or could we combine the advantages of the two approaches?
I believe there is room for building an integrated platform that is in fact fairly decoupled. After all, that is what we do at eZ: build a global platform that implements components as decoupled as possible while maintaining a single solution structure. This merger of approaches helps developers customize applications and build future solutions on top of the platform, taking advantage of the decoupled approach while reducing the risk of the unwieldy outcome that decoupling always makes possible.
Let’s take the content repository of eZ Platform. It can hold and manage any kind of content, exposing it through REST APIs (XML or JSON) to other applications. But it is also tightly integrated with the content delivery layer of the platform, allowing it to take care of Web publishing with standard capabilities for that, including cache invalidation.
And there is a lot of value in such an integration that would cost a lot to redevelop if you wanted to provide features such as front page management or advanced caching.
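On the decoupled side of that same repository, a consumer simply asks for raw content and renders it itself. Here is a minimal Python sketch, with an invented JSON payload shape (not the platform’s actual REST schema):

```python
import json

# Payload as a decoupled consumer might receive it from a content
# REST API; the field layout here is purely illustrative.
payload = json.loads("""
{
  "Content": {
    "id": 42,
    "fields": {"title": "Hello", "body": "<p>World</p>"}
  }
}
""")

def render_teaser(content):
    """The consumer owns the presentation: it turns raw fields
    into whatever markup its channel needs."""
    fields = content["Content"]["fields"]
    return f"<article><h1>{fields['title']}</h1>{fields['body']}</article>"

print(render_teaser(payload))
# → <article><h1>Hello</h1><p>World</p></article>
```

The same payload could just as well feed a mobile app or a newsletter, which is exactly the promise of treating the repository as a service.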
The very same kind of thinking applies to the other components I mentioned above: search, user management, and so on.
The important thing to keep in mind is to have an open platform whose infrastructure-level bricks are decoupled. That is why we decided to rely on a framework such as Symfony (versus our own framework), and on search engines such as Solr or Elasticsearch.
That is also why we decided to make all the internal APIs of our system available through a REST API that external applications can use. Here, there are huge differences between our approach and proprietary software designed as a black box.
Coupled and decoupled at the same time
So the way we do it at eZ is to offer coupled and decoupled at the same time. There are many benefits to this. Our current platform does it fairly well, even if, looking forward, we want to push this approach further. Who knows when, but one day we surely would love the platform to be fully based on microservices. This is the direction we’ll be going, but as Martin Fowler and others have pointed out, most successful microservices projects started as monoliths that were broken down once the team knew the project’s needs.
A balancing act
So, should you go fully decoupled and add a good dose of in-house development? Should you go for an integrated CMS solution?
Our take at eZ is that it isn’t black or white, and the route to success will first and foremost require a clear understanding of all stakeholders’ needs and capabilities. We aim to provide a solution that allows both routes.
(Main image borrowed from Anders Dahnielson)