How Does Low-Code Fit Into the Cloud-Native World?
In this article, find out how low-code fits into the cloud-native world.
Defining the Terms
Low-code tools simplify and accelerate the work of professional developers by providing a visual model-based environment for creating applications. Low-code frees developers from most of the burden of hand-coding integrations, security capabilities, and other ‘plumbing’ code so they can focus on higher-value tasks centering on business needs.
Cloud-native computing extends the best practices of the cloud to all of enterprise IT, including horizontal scalability, elasticity, subscription-based delivery models, and more. Hybrid IT, edge computing, zero-trust security, and DevOps are all part of the cloud-native computing story.
Today, Kubernetes is in the eye of the cloud-native computing storm, as containers and microservices have been driving much of the innovation. Be that as it may, cloud-native is much broader than Kubernetes, covering the full gamut of environments from traditional on-premises to virtualized to serverless.
Finally, we define microservices as cohesive, parsimonious units of execution. Cohesive means that each microservice does one thing and does it well. Parsimonious refers to the fact that microservices should be as small as practical but no smaller. And unit of execution refers to the fact that microservices consist of modular, executable chunks of code that interact with other microservices (and anything else) via APIs.
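To make that definition concrete, here is a minimal sketch in Go of a single-purpose microservice: it does one thing (a currency conversion, with an illustrative hard-coded rate) and exposes that capability only through an HTTP API. The endpoint, port, and rate are assumptions for illustration, not anything prescribed by the article.

```go
// A minimal sketch of a cohesive, parsimonious microservice: one job,
// reachable only via its API. All names and values are illustrative.
package main

import (
	"encoding/json"
	"log"
	"net/http"
	"strconv"
)

func convert(w http.ResponseWriter, r *http.Request) {
	amount, err := strconv.ParseFloat(r.URL.Query().Get("amount"), 64)
	if err != nil {
		http.Error(w, "invalid amount", http.StatusBadRequest)
		return
	}
	// Cohesive: this service only converts USD to EUR. Anything else
	// (authentication, persistence, reporting) lives in other services
	// that would call it, or be called by it, over APIs.
	json.NewEncoder(w).Encode(map[string]float64{"eur": amount * 0.92})
}

func main() {
	http.HandleFunc("/convert", convert)
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```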
The Other Side of Microservices
At first glance, low-code and cloud-native don’t seem to have much to do with each other, yet many low-code vendors are making the connection anyway. After all, microservices are chunks of software code, right? So why hand-code them when you could take a low-code approach to crafting your microservices?
Not so fast. Microservices generally focus on back-end functionality that simply doesn’t lend itself to the visual modeling context that low-code provides. Furthermore, today’s low-code tools tend to center on front-end application creation (often for mobile apps), as well as business process workflow design and automation. Bespoke microservices are unlikely to be on this list of low-code sweet spots.
It's clear from the definition of microservices above that they are code-centric and thus might not lend themselves to low-code development. However, how organizations assemble microservices into applications is a different story.
Some low-code vendors would have you believe that you can think of microservices as LEGO blocks that you can assemble into applications. Superficially, this LEGO metaphor is on the right track – but the devil is in the details.
When microservices first came on the tech scene, developers were certainly excited to build applications with them. However, because microservices are parsimonious, developers needed far more of them to build any serious business functionality than they were used to with traditional object-oriented approaches. They then also had to work with operators to manage, secure, and scale them all.
It soon became clear that setting up some kind of microservices free-for-all where any microservice might interact with any other microservice was a path to excessive complexity that would impede the ability to scale the application building effort and manage the resulting deployment.
For this reason, treat the LEGO block metaphor for microservices with caution. Any vendor suggesting that its low-code tool can assemble microservices willy-nilly is engaging in some serious hand-waving.
The Rise of Cloud-Native Architecture
The challenges of willy-nilly microservices assembly helped drive the development of container orchestration platforms like Kubernetes, as well as a broader set of best practices at the heart of cloud-native computing that we call cloud-native architecture.
Cloud-native architecture both informs and leverages how Kubernetes goes about orchestrating containers via pods and clusters, but that’s just scratching the surface.
Like other architectural approaches, cloud-native architecture is inherently technology-neutral. Its most important characteristic is how it delineates the coherent, comprehensive abstraction that defines how cloud-native computing works.
Abstractions are intentional simplifications that hide the underlying complexity of technology by providing useful representations of that technology to its users. Typically, abstractions apply within particular technology contexts: compilers abstract object code, virtual machines abstract physical servers, etc.
With cloud-native architecture, in contrast, the abstraction extends across the entire IT landscape, from on-premises to the edge, from the cloud to serverless computing.
In essence, we’ve drawn a water line across everything we’re doing. Below the line is the infrastructure that supports the abstraction. Above the line are businesses, customers, users, and anyone who wants to build applications that leverage abstracted IT assets.
Once this abstraction is in place, the role low-code plays in the cloud-native world becomes clearer. Above the abstraction, we not only have composable microservices; we have secure, managed, scalable representations of software functionality that lend themselves to low-code application construction. Whether that functionality happens to be microservices or something else doesn’t matter to the people above the water line.
Cloud-native architecture is thus the key to empowering low-code to work with microservices — or any other software capability — at scale.
Low-Code Below the Water Line
When they work properly, abstractions hide all manner of complexity from view – but that complexity still exists. If anything, supporting seamless abstractions actually requires additional complexity below the abstraction water line.
Such is the challenge with cloud-native infrastructure. Anybody who’s had the pleasure of working with Kubernetes (or any technology in its ecosystem) will agree that this world is onerously complicated.
Fortunately, there are core cloud-native architectural principles that help with this complexity: trustlessness, statelessness, and codelessness (see my article Why Less Is More is the Secret to Cloud-Native Computing for an introduction to these topics).
Of these three, codelessness is the one that links cloud-native computing and low-code. Codelessness is essentially the ‘cattle not pets’ principle that declarative descriptions should drive all infrastructure configurations. If you want to change anything in the production environment, update its description (or recipe, or manifest, or chart) and redeploy.
This declarative principle is at the heart of the infrastructure as code movement – except that cloud-native codelessness takes this movement even further. After all, you don’t really want infrastructure to be code. You want it to be a declarative representation of desired behavior.
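To make the “update the description and redeploy” idea concrete, here is a minimal sketch in Go of the declarative principle: desired state is declared as data, and a reconcile step drives the running system toward it. The type names and the in-memory “cluster” are assumptions for illustration; real platforms such as Kubernetes do this with manifests and controllers.

```go
// A minimal sketch of 'cattle not pets': declare the desired state,
// then reconcile actual state toward it instead of hand-tweaking servers.
package main

import "fmt"

// DesiredState is the declarative description (the 'manifest').
type DesiredState struct {
	Service  string
	Replicas int
}

// reconcile compares declared and actual replica counts and returns
// the corrective action the platform would take.
func reconcile(desired DesiredState, actual int) string {
	switch {
	case actual < desired.Replicas:
		return fmt.Sprintf("scale %s up by %d", desired.Service, desired.Replicas-actual)
	case actual > desired.Replicas:
		return fmt.Sprintf("scale %s down by %d", desired.Service, actual-desired.Replicas)
	default:
		return "no change"
	}
}

func main() {
	// To change production, change the declaration and re-apply it.
	manifest := DesiredState{Service: "checkout", Replicas: 3}
	fmt.Println(reconcile(manifest, 1)) // scale checkout up by 2

	manifest.Replicas = 5 // edit the description, then 'redeploy'
	fmt.Println(reconcile(manifest, 3)) // scale checkout up by 2
}
```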
So far, so good, except that a static representation of desired behavior still doesn’t go far enough. Think of a single YAML file, manifest, or recipe: how are you going to update, version, test, and manage those representations?
Instead, we want an abstracted model of that representation of desired behavior. Such models don’t simply capture the behavior itself; they also capture the ability to change that behavior and the constraints on those changes, and they empower the people interacting with them to implement and manage the changes.
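To make that distinction concrete, here is a rough sketch in Go of the difference between a static representation and a model of it: the model wraps a declared setting together with the constraints on changing it. The names and limits are hypothetical, chosen only to illustrate the idea.

```go
// A rough sketch of a model that captures both a declared behavior and
// the rules governing changes to it, rather than a bare value in a file.
package main

import (
	"errors"
	"fmt"
)

// ReplicaModel models a replica-count setting plus its change policy.
type ReplicaModel struct {
	current  int
	min, max int
}

// Set applies a change only if it satisfies the model's constraints;
// this is also where versioning, testing, and approval hooks could live.
func (m *ReplicaModel) Set(n int) error {
	if n < m.min || n > m.max {
		return errors.New("requested replica count violates policy")
	}
	m.current = n
	return nil
}

func main() {
	m := &ReplicaModel{current: 3, min: 2, max: 10}
	fmt.Println(m.Set(5))  // <nil>: change allowed by the model
	fmt.Println(m.Set(50)) // error: constraint enforced by the model
}
```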
We have now come full circle. Why? Remember what low-code tools do. They provide – you guessed it – abstracted models of the representation of desired application behavior. In other words, the codelessness principle of cloud-native computing and low-code tooling are made for each other.
The ZippyOPS Take
Given how ingrained the codelessness principle is in the cloud-native world (even though the phrase ‘infrastructure as code’ is far more common), you might think that low-code would have already found a place among cloud-native infrastructure teams.
Such is not the case. Instead, the professionals on such teams are using an entirely different set of abstracted models via the command line interface (CLI).
Where application builders above the cloud-native water line find value in visual models, engineers below the water line would rather peck out condensed instructions one character at a time.
I’m not about to get into an argument over which approach is better. Each serves its own purposes. Instead, I’d like to make two important points.
First, the CLI-based configuration popular in cloud-native infrastructure circles is itself a low-code construct, or at the very least analogous to one.
Second, if any vendors decided to create visual, model-based low-code tools for cloud-native infrastructure, they might find a willing audience. After all, the CLI isn’t necessarily the best approach, even for cloud-native infrastructure engineers.