How Your Application Architecture Has Evolved
This post discusses how application architecture has evolved over the years, from monolithic and service-oriented architecture to microservices and event-driven architecture (EDA).
If you have been developing applications or been involved with application architecture in one way or another, then you have seen a lot of change in the last few years. So many different types of architectures and technologies have come and gone that it can be hard to keep track of them all. When you reflect on them, however, they tell an interesting story, not just about the past but about where application architecture is heading.
In this post, I will discuss how application architecture, in my opinion, has evolved in the last few years and what has been the driving factor for each evolution. We will talk about monolithic architecture, service-oriented architecture (SOA), microservices, and finally, event-driven architecture (EDA). Let's begin!
Back in the day, everything used to be monolithic. Huge teams would work on one monolithic application that was responsible for doing a lot of things. Monolithic architecture lets you quickly put together a prototype, with one application doing everything, and there is less coordination overhead since you don't need to rely on any other team. However, as the application is pushed to production and continues to grow, things quickly get out of control.
For example, a typical monolithic application may comprise multiple layers such as the user interface layer, business logic layer, data interface layer, and data store layer. This application would take in user input, process it, apply business logic to it, enrich it with some existing data, and then might store it in a relational database for additional processing later.
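To make the layering concrete, here is a toy sketch of such a monolith. All names are my own illustration, not from any real system; the point is that every layer lives in one process and one codebase:

```python
# A toy monolith: UI handling, business logic, and storage all live
# in a single process and codebase. Names are illustrative only.

class MonolithApp:
    def __init__(self):
        self._store = {}  # stands in for the relational data store layer

    def handle_request(self, user_id: str, raw_input: str) -> str:
        # "User interface" layer: parse and validate the input
        value = raw_input.strip()
        if not value:
            raise ValueError("empty input")
        # Business logic layer: apply a trivial rule
        enriched = value.upper()
        # Data interface layer: persist the result for later processing
        self._store[user_id] = enriched
        return enriched

app = MonolithApp()
print(app.handle_request("u1", " hello "))  # every layer runs in one call
```

A change to any one layer here means redeploying the whole application, which is exactly the pain point discussed next.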
There are three main disadvantages of monolithic architecture: slow rollouts, poor scalability, and inter-dependency. Monolithic applications are much harder to debug and update: large applications require a lot of time and effort to identify issues and roll out updates, and by the time those updates ship, the requirements may well have changed.
The second disadvantage of monolithic applications is poor scalability. There is only so much one application can do. In today's world, where computational resources are far cheaper than they used to be, it has become much easier to parallelize work by simply throwing hardware at it. A monolithic application that used to run on a powerful but extremely expensive server can now run as many smaller applications in parallel on commodity hardware. Furthermore, the slower rollouts discussed earlier made it harder to scale rapidly.
Additionally, with a large application, every little change can impact one or more other parts of the application, which increases the risk of breaking an important feature. For example, a bug in the user interface layer can take down the entire application.
In one of my previous jobs, I was working on an app that provided access to cross-asset market data (equities, FX, commodities, etc.). During one release, I rolled out a new feature for our equities users, but because our application was monolithic, my small change ended up breaking a very important feature used by our FX users. The two features were completely independent, but because they were part of the same codebase, they had several shared resources. Needless to say, the FX users were not happy.
Agile vs. Waterfall
Soon, companies realized that they needed to find a better way to architect their applications. Around the same time, the agile methodology was becoming increasingly popular. Previously, companies mostly developed applications using the waterfall methodology, which meant gathering a lot of requirements, extensive planning, covering all edge cases, and then carefully releasing the final product with all the features in one big bang.
For some industries, this is the only way to do it due to the costs involved per iteration and/or regulatory requirements. For many others, the agile methodology worked better. Agile is all about releasing a minimum viable product (MVP) in quick iterations. The faster you fail and learn what doesn't work, the better. The agile methodology, which had been around for a while, became extremely popular around 2011 when the Agile Alliance (yes, that exists) published the Guide to Agile Practices.
Agile certifications and agile coaches became ubiquitous. No matter how hard you tried to hide, your scrum master would always find you for the daily scrum.
As the agile methodology picked up, it became clear that there were valuable benefits to having smaller applications that could be easily updated and scaled. This brings us to service-oriented architecture (SOA). Whereas in monolithic architecture one application did everything itself, in service-oriented architecture an application is broken down into several smaller services based on their use cases. As an article by IBM puts it:
The core purpose of SOA was to expose data and functions buried in systems of record over well-formed, simple-to-use, synchronous interfaces, such as web services.
Going back to our example of a monolithic application, it can be broken down into multiple smaller services:
- User interface service
- Business logic service
- Data integration service
- Datastore service
Each of these services is responsible for one specific use case. They all exist independently and communicate with each other via synchronous APIs, typically based on the Simple Object Access Protocol (SOAP). However, as the number of services in your organization grows, it becomes harder to write a point-to-point interface for each pair of services that need to talk. This is when you benefit from an Enterprise Service Bus (ESB). ESBs allowed developers to decouple their services (see the diagram below) and made the overall architecture more flexible.
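Here is a minimal sketch of the decoupling idea (my own illustration, not a real ESB): services register handlers with a bus and address each other by service name, so no service needs a hand-written interface to every other service:

```python
# A toy "service bus": services register handlers with the bus and call
# each other by name, staying decoupled from one another.
# Illustration only; a real ESB adds routing, transformation, transport, etc.

class ServiceBus:
    def __init__(self):
        self._services = {}

    def register(self, name, handler):
        self._services[name] = handler

    def call(self, name, payload):
        # Synchronous request/reply, as in classic SOA
        return self._services[name](payload)

bus = ServiceBus()
bus.register("business-logic", lambda text: text.upper())
# The UI service only knows the bus, never the business-logic service itself
bus.register("ui", lambda text: bus.call("business-logic", text.strip()))

print(bus.call("ui", "  order placed "))  # prints: ORDER PLACED
```

Swapping out the business-logic service only requires re-registering a handler; the UI service is untouched, which is the flexibility the ESB bought you.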
There are multiple benefits of service-oriented architecture including:
- Quick rollouts
- Easier to debug
- Clear assignment of responsibilities
- Less dependency on other services/components
With such clear benefits, most companies started to adopt service-oriented architecture along with the agile methodology, but little did they know, the cloud revolution was just around the corner.
Service-oriented architecture eventually paved the way for microservice architecture which has many similarities but is different in a few subtle ways.
The most important factor, in my opinion, that led to microservice architecture is cheap and flexible infrastructure. Since it became so easy to horizontally scale your infrastructure and run your services on a cluster instead of a beefy server, developers were encouraged to write software that could easily run in parallel on a cluster. Around the same time, there was a surge in distributed applications and frameworks capable of processing big data, such as Hadoop, which popularized the MapReduce programming model.
Furthermore, around 2015, AWS hit its stride. It had been around for a while by then, but that's when the concept of infrastructure as a service (IaaS) really took off and it became extremely convenient to spin up EC2 instances cheaply. Startups were the first to embrace IaaS, soon followed by small and midsize companies. After much debate and discussion, large corporations finally embraced IaaS as well, many deciding to go with a hybrid cloud approach.
Distributed infrastructure on the cloud is great but there is one problem. It is very unpredictable and difficult to manage compared to a handful of servers in your own data center. Running an application in a robust manner on distributed cloud infrastructure is no joke. A lot of things can go wrong. An instance of your application or a node on your cluster can silently fail. How do you make sure that your application can continue to run despite these failures? The answer is microservices.
A microservice is a very small application that is responsible for one specific use case, just like in service-oriented architecture, but it is completely independent of other services. It can be developed using any language and framework and can be deployed in any environment, whether on-prem or in the public cloud. Additionally, microservices can easily run in parallel on a number of different servers in different regions to provide parallelism and high availability. For example, a small data application can run on 5 instances in a compute cluster so that if one instance fails, the other 4 keep the application functioning.
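The high-availability idea above can be sketched as a simple client-side failover loop. This is an illustration under my own assumptions (each replica is modeled as a plain callable), not a production pattern:

```python
# A toy failover loop: try each replica of a service in turn and return
# the first successful response. Real systems add health checks, retries
# with backoff, and load balancing; this only shows the core idea.

def call_with_failover(replicas, request):
    """replicas: list of callables, each standing in for one instance."""
    last_error = None
    for replica in replicas:
        try:
            return replica(request)
        except ConnectionError as err:
            last_error = err  # this instance failed; try the next one
    raise RuntimeError("all replicas failed") from last_error

def dead(_request):
    raise ConnectionError("instance down")

def healthy(request):
    return f"processed {request}"

# One of five instances has failed silently; the others keep the service up.
replicas = [dead, healthy, healthy, healthy, healthy]
print(call_with_failover(replicas, "job-42"))  # prints: processed job-42
```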
Breaking down your services into multiple microservices meant they needed to communicate with each other. Unlike service-oriented architecture, which relied on enterprise service buses and synchronous APIs, microservices rely on message brokers, such as Solace's PubSub+ broker, and asynchronous APIs.
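The asynchronous style can be sketched with an in-process publish/subscribe broker. This is a toy of my own making; a real broker such as Solace PubSub+ adds network transport, persistence, and delivery guarantees:

```python
# A toy pub/sub broker: publishers and subscribers only know topic names,
# never each other, and publishing never blocks on a consumer.
from collections import defaultdict
from queue import Queue

class Broker:
    def __init__(self):
        self._topics = defaultdict(list)

    def subscribe(self, topic):
        # Each subscriber gets its own queue for the topic
        q = Queue()
        self._topics[topic].append(q)
        return q

    def publish(self, topic, message):
        # Fire-and-forget: fan the message out and return immediately
        for q in self._topics[topic]:
            q.put(message)

broker = Broker()
inbox = broker.subscribe("orders")
broker.publish("orders", {"id": 1, "status": "created"})
print(inbox.get())  # prints: {'id': 1, 'status': 'created'}
```

Note the contrast with the synchronous SOA call: the publisher here does not wait for, or even know about, whoever consumes the event.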
Just as the transition to service-oriented architecture was fueled by the agile methodology, the microservice movement was fueled by containerization. This article describes containerization well:
Containerization involves bundling an application together with all of its related configuration files, libraries and dependencies required for it to run in an efficient and bug-free way across different computing environments.
Docker, which was initially released in 2013, is the most popular container platform. Almost all modern software these days can be run via Docker. With the rise of cloud infrastructure, Docker became extremely important for making sure you can run your software in new environments, especially in the cloud.
As microservices grew in popularity, so did the concept of a service mesh, which keeps services connected, mostly using request/reply messaging patterns:
A service mesh is a configurable, low‑latency infrastructure layer designed to handle a high volume of network‑based interprocess communication among application infrastructure services using application programming interfaces (APIs).
In 2014, Google open-sourced Kubernetes, which allows you to orchestrate your microservices. With Docker and Kubernetes, it became much easier to deploy and manage distributed microservices in the cloud. In the last few years, these two technologies have only become more popular. Today, most new startups write cloud-native microservices that can easily be deployed via Docker and orchestrated via Kubernetes, and many large corporations work with such companies to transition their applications to the cloud.
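To show what that orchestration looks like, here is a minimal Kubernetes Deployment manifest for the five-instance data application from earlier. The names and image are placeholders of my own, not from any real deployment:

```yaml
# Minimal, illustrative Kubernetes Deployment: five replicas of a
# containerized microservice. Kubernetes restarts any pod that fails,
# keeping the desired replica count.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: data-app           # placeholder name
spec:
  replicas: 5              # if one pod dies, the other four keep serving
  selector:
    matchLabels:
      app: data-app
  template:
    metadata:
      labels:
        app: data-app
    spec:
      containers:
        - name: data-app
          image: example.com/data-app:1.0   # placeholder image
```

You would apply this with `kubectl apply -f deployment.yaml` and let Kubernetes handle scheduling the replicas across the cluster.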
The rise of cloud infrastructure and distributed microservices has led to the creation of numerous startups providing services for monitoring your microservices (how much memory are they consuming?), automation (continuously deploying microservices across servers automatically), resource management (bidding for the cheapest AWS resources), and so on. If you have ever attended an AWS Summit, then you know what I am talking about.
As we continue to capture more and more data, we continue to find creative ways to use it. With the rise of IoT (e.g., the Alexa-enabled microwave) and wearable devices (e.g., the Apple Watch), there is an abundance of time-series data, or events.
With so many screens in front of us (smartphones, smartwatches, tablets, laptops, etc.) where notifications can be pushed instantly, companies are finding it extremely important to become more event-driven. Their users expect real-time notifications whenever important events occur. For example, my Delta app notifies me in real time when my flight is delayed or when boarding has started. It doesn't wait for me to check manually or poll for events at a regular interval.
In this brave new event-driven world, microservices are designed around events. The approach is still quite new and is rapidly making its way across industries.
This post has already gotten longer than I intended it to be so I am going to end it here before you doze off. Event-driven architecture is a very interesting topic and I would like to dedicate an entire post to talk about it.
In this post, my goal was to show you how, in my opinion, application architecture has evolved over the last few years, shaped by different technologies and requirements. While most companies are still embracing microservices and the cloud, others are a bit ahead, embracing event-driven architecture. In my opinion, the foreseeable future is microservices designed in an event-driven way.
A lot of what I wrote here is from my recollection and some research. You may have experienced something different in your region or organization that I may have misrepresented or missed. Please let me know what your thoughts are on these different architectures.