The Microservices Maturity Model
Understand the challenges of microservices adoption
and where most teams get stuck on their journey
to accelerate software delivery, while ensuring security.
January 13, 2023
After seven years working with mission-critical defense and intelligence agencies within the U.S. government to deploy modern software applications in some of the most demanding environments worldwide, we have developed a maturity model that helps software architects, DevOps, and platform engineering teams drive microservices adoption.
This maturity model defines four stages of adoption that organizations pass through on their journey to realize the promise of microservices: greater application agility, flexibility, and scalability without the well-known trade-offs in complexity, security, and visibility.
Our customers have found this methodology helpful in understanding the operational benefits of each stage and in addressing the technical challenges of moving from one stage to the next. It supports their goal of accelerating software delivery and increasing speed to market, while ensuring security across hybrid, multi-cloud, and on-premises environments.
Stage 0: Pre-Cloud
A monolithic architecture, usually coupled with a pre-cloud suite of tools, consists of several applications running on servers in an on-premises data center. It requires large development teams to integrate separate code changes, each needing detailed validation, before new versions can be pushed out on a quarterly or annual basis.
This process is slow, and updating applications demands considerable developer effort. Quality control and assurance are essential before anything is deployed to production. Upgrades usually take many months and often require specific outage windows.
Lastly, teams must plan carefully for data migrations and potential rollback strategies in case something does not go as planned. Given the speed of business today, organizations can no longer afford to hide out in this stage, release new applications sporadically, and expect to remain competitive.
For example, many industries, such as healthcare and finance, still need to ensure customers can access the data stored in legacy systems, applications, and databases, while figuring out how to build modern, user-friendly customer experiences in new cloud and mobile applications.
Stage 1: Cloud-Native
Most organizations have moved past the monolith stage, introducing cloud-native technologies and microservices as the bridge between legacy and modern applications. They have begun to break portions of their monoliths into separate microservices to accelerate software development. They have adopted containers as a viable mechanism to package their software. They have also implemented more agile software development practices, such as Continuous Integration and Continuous Delivery (CI/CD) pipelines, for faster application delivery, in many cases at the expense of enterprise security and governance.
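To make the container and CI/CD step concrete, here is a minimal sketch of such a pipeline in GitHub Actions syntax. The workflow name, registry, and image name are hypothetical placeholders, not drawn from any specific customer environment:

```yaml
# Hypothetical CI pipeline that builds and pushes a container image
# on every commit to main, so each change is immediately deployable.
name: build-and-push
on:
  push:
    branches: [main]
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Build container image
        run: docker build -t registry.example.com/orders:${{ github.sha }} .
      - name: Push image to the registry
        run: docker push registry.example.com/orders:${{ github.sha }}
```

Tagging each image with the commit SHA is one common convention; it lets a delivery pipeline deploy, trace, and roll back individual changes rather than quarterly releases.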
In this stage, individual teams build separate pieces of an application using their preferred programming languages, development frameworks, and logging tools. They often download different sets of open-source tools, libraries, and components and stitch them together, requiring expertise and knowledge of how solutions are composed at a granular level. The challenge is how best to connect these components to enterprise assets (databases, ERP tools, services, and core business layer functions), while managing one holistic application.
The modernization process usually begins with a pilot application running 6-12 microservices in a cloud-native environment, such as a managed Kubernetes cluster. Many organizations in the cloud-native stage have begun implementing Docker and Kubernetes, running upwards of 30-50 Kubernetes clusters, but do not have extensive enterprise-wide Kubernetes or cloud-native experience.
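A pilot microservice on a managed Kubernetes cluster is typically packaged as a container and declared with manifests like the following sketch; the service name, image, and ports are illustrative assumptions, not a prescribed layout:

```yaml
# Hypothetical Deployment for one pilot microservice.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: orders-service
  labels:
    app: orders
spec:
  replicas: 2
  selector:
    matchLabels:
      app: orders
  template:
    metadata:
      labels:
        app: orders
    spec:
      containers:
        - name: orders
          image: registry.example.com/orders:1.0.0
          ports:
            - containerPort: 8080
---
# A ClusterIP Service gives the other microservices a stable
# in-cluster DNS name for this one.
apiVersion: v1
kind: Service
metadata:
  name: orders
spec:
  selector:
    app: orders
  ports:
    - port: 80
      targetPort: 8080
```

Each of the 6-12 pilot microservices gets its own pair of manifests like this, which is exactly why every one of them becomes a separate networked application to secure and observe.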
Unlike legacy applications, which sit behind hardened firewalls and are managed on servers or virtual machines, each microservice in a cloud-native environment is a separate networked application unto itself, with limited control, security, and visibility. In fact, the more a monolith is broken into separate microservices, the larger the attack surface becomes, which makes it harder to meet existing compliance requirements with new cloud-native applications while auditing, reporting, and proving that Kubernetes is secure.
IT teams are often left frustrated, unable to understand, control, or observe what is happening in the applications they manage, as they begin to discover the complexity of deploying decentralized software with microservices, APIs, and data sources across hybrid and multi-cloud environments.
Stage 2: Service Mesh
Organizations in this stage begin to evaluate deploying a service mesh: a layer, decoupled from the application code, that controls configuration policies, routes internal east-west application traffic, and enforces security across the application networking stack of connected microservices, APIs, and data sources.
Security is the number one driver for organizations to move from Stage 1 to Stage 2. Because each connected microservice, API, and data source runs in a different location, IT teams must secure any data or communications flowing between “Application A” and “Application B” in either direction.
This is handled with mutual Transport Layer Security (mTLS), which ensures that all communications flowing between applications, APIs, and microservices in an enterprise’s IT environment are encrypted. In most service mesh deployments, these communications are between applications running within a Kubernetes environment.
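As one concrete example, in an Istio-based service mesh (Istio is one popular mesh; this article does not prescribe a specific one), mesh-wide strict mTLS can be enforced with a single policy object:

```yaml
# Applying a PeerAuthentication policy in the mesh root namespace
# (istio-system) forces all workload-to-workload traffic through
# mutual TLS; plaintext connections between sidecars are rejected.
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: istio-system
spec:
  mtls:
    mode: STRICT
```

The point of the mesh layer is visible here: encryption is switched on by policy, with no change to the application code of “Application A” or “Application B”.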
Once IT teams prove their cloud-native applications are secure using a service mesh, with a control plane and a fleet of data planes (proxy or proxy-less), the mesh also gives them visibility into application performance. They can allocate the right resources to important applications and comb through statistics to trace and troubleshoot problems as they arise.
At this point, DevOps can work with application development teams to see how their applications are running, ensure resiliency, and meet SLAs, SLOs, and other metrics. They are betting that mTLS encryption will be enough to meet security team requirements. They believe they are finally ready to roll out into production.
But this is where almost every company gets stuck — in a dev environment — when they realize that a service mesh is not enough on its own to control complexity in a live production environment, managing different clouds, VMs, Kubernetes clusters, and exposed APIs.
Stage 3: Application Networking
As they attempt to move from Stage 2 to Stage 3, DevOps and platform engineering teams begin to realize that running decentralized, microservices-based applications in a live production environment, with its focus on Day 2 operations, is exponentially more complex.
Organizations at this stage have implemented a service-centric enterprise with many applications running that connect to APIs and managed data services. Some organizations may have a service mesh and follow modern CI/CD processes for deploying new application changes in cloud-native environments, with separate teams working on different parts of an application running on AWS, Azure, or Google Cloud.
Once an application needs to go into production, however, organizations need to allow users to access all the various parts of the application. IT teams want to integrate SSO, Microsoft Active Directory, or an independent IAM tool with their service mesh, Kubernetes, and applications, but often struggle to do so.
This is when identity management, user authentication, role-based access control, and detailed user-tracking audits become major barriers to wide-scale adoption, as IT teams grapple with an ever-growing suite of third-party cloud-native middleware used to connect legacy databases, APIs, microservices, and data sources across decentralized applications.
It’s inefficient to have separate AWS admins, Azure admins, database admins, and network admins configure the right settings for “User A” to have the right access to the right systems at the right times, while ensuring that “User B” has access to different resources at different times.
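In Kubernetes terms, this per-user access problem is what RBAC objects express. A minimal sketch, with hypothetical user, namespace, and role names:

```yaml
# Grant "user-a" read-only access to pods in the "payments"
# namespace; "user-b" would need a separate RoleBinding scoped to
# different resources. All names here are illustrative.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-reader
  namespace: payments
rules:
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: user-a-pod-reader
  namespace: payments
subjects:
  - kind: User
    name: user-a
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
```

Multiply bindings like these across every cloud, cluster, and admin team, and keep them consistent with the SSO or IAM system of record, and the coordination burden this paragraph describes becomes clear.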
Additionally, CISOs and security teams need to know what controls are in place to know who has access to which applications, ensure that the applications are isolated in multi-tenant environments, and prove compliance with security industry best practices across public and private clouds.
Most organizations don’t have the internal experience to manage this level of complexity, and IT teams that have implemented service mesh solutions often realize they can’t move their applications to production without a more comprehensive microservices solution or a large amount of outsourced services expertise.
Greymatter.io Was Built to Help Enterprises Move to Stage 3
These are the same challenges the world’s largest defense and intelligence agencies ran into, and they are why greymatter.io has spent the last seven years building the most enterprise-ready application networking platform, one that addresses the issues Stage 0, 1, and 2 organizations face when moving to Stage 3.
The result is an enterprise application networking platform that combines service mesh, API management, and infrastructure intelligence to help organizations reduce complexity, ensure security, enforce compliance, and optimize performance across any environment.
We understand that application networking and service-centric enterprises extend beyond Kubernetes. Our platform was built as a bridge to the future for enterprises that want the benefits of APIs, microservices, and service mesh now — in months, not years — because organizations still use monoliths, VMs, and bare metal.
So, if you’re at Stage 0 or 1, we can help you lay out your reference implementation architecture in 30 days or less, outlining how to make sure everything integrates and preventing the roadblocks you would otherwise hit when moving from Stage 1 to Stage 2.
If you’re stuck in the mud in Stage 2 and spinning your wheels trying to get to Stage 3 — or if you are in Stage 3 and realize that the increased complexity of managing a decentralized environment is consuming your platform engineering resources — we can help you assess your application networking architecture and implement a model that enables zero-trust security, mTLS authentication and end-to-end encryption in 90 days or less.
Contact us today to schedule your free consultation, determine your microservices maturity level, and build a reference implementation architecture to begin moving your organization up the microservices maturity model to accelerate software delivery and increase speed to market, while ensuring security.