As part of DevOps, many companies today are migrating to microservices architecture with Kubernetes and other open source technologies. In this blog, we share the benefits of Kubernetes for microservices architecture and give you tips on how to transition.
Using Kubernetes can help you innovate faster, and teams that adopt it often report higher productivity. But a monolithic application can't take full advantage of what Kubernetes offers.
Kubernetes, however, is a great platform for a microservices architecture: containers make it easier to break an application into small, independently deployable services.
Using Kubernetes for a microservices architecture brings many benefits.
Kubernetes Is Just One Part of Your Stack

Your microservices architecture is built on other open source technologies too. Find out how they work together, and what you need to build an ideal stack. Use the interactive stack builder to get expert advice.

Build My Stack
Here's how to transition to microservices architecture using Kubernetes.
Before you tear down your existing architecture, you need to understand it.
Here's an example from a project where I worked with a team rearchitecting an application for Kubernetes.
This complex architecture represents a line of business (LOB) software developed for a highly regulated industry that needed to meet pharmaceutical-grade chain-of-custody requirements in the United States.
By documenting the architecture, you can understand what's working and what's not working. This includes the current technologies you're using.
You also need to assess your current technologies.
Here's what this example LOB application used:
All of these services were highly available, and most ran Active/Active. Camel and SwitchYard were stateless, KIE jBPM was configured with ZooKeeper to replicate its Git filesystem, Infinispan clustered the session information, PostgreSQL was managed by Pgpool-II, and Elasticsearch relied on its native clustering.
You also need to evaluate your resource usage.
The LOBAPP deployment in WildFly was colocated with the KIE jBPM Workbench deployment, so high availability was handled in a hot Active/Passive manner. This guarantees uptime, but reaps none of the rewards of running an n-tier, or even load-balanced, system.
To avoid bottlenecks, customers were given two VMs per deployment. This further drove up resource consumption and costs for the LOBAPP, which was sold as SaaS. This hurt the company developing the software, as stakeholders questioned whether the model could scale.
Nginx was used to create path-based load balancers internal to the services on the VM, and Puppet and Foreman would deploy this incredibly complex VM. The system ran well, didn't require much intervention, and was clean and easy to troubleshoot.
However, the biggest problems were resource utilization and the time it took to provision new customers.
Salespeople had to wait up to an hour for a new instance to finish provisioning. Teams would prep a day in advance of a meeting just to make sure the environment was ready for a demo, eating up even more resources on the private cloud. The SaaS offering was losing too much of its margin to this expensive per-customer deployment.
Migrating to a multi-tenant microservice mesh would greatly improve the scalability of this system. Better still, it wouldn't require a major rewrite, thanks to the message-driven nature of the application.
Let’s walk step by step through the architectural decisions that were made when migrating the application from a VM monolith to a Kubernetes multi-tenant microservice.
Early on in the architecture process, we decided that because of the message-driven nature of the application, the microservices architecture should look exactly like one of the VMs, but exploded:
The WAR deployment methodology should be similarly exploded, breaking the services apart by domain, and interconnecting them by adding a message broker like ActiveMQ. Calls to HTTPS endpoints would be handled by a service mesh, like Istio.
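To make the "broken apart by domain, interconnected by a broker" idea concrete, here is a minimal sketch of two hypothetical domain services exchanging an event through a queue. Python's standard `queue` module stands in for ActiveMQ so the example is self-contained; the service names, destination, and event shape are all illustrative, not taken from the real application.

```python
import queue

# One queue per destination; a stand-in for ActiveMQ destinations.
broker = {"custody.events": queue.Queue()}

def publish(destination, body):
    """Orders service: emit an event instead of calling custody in-process."""
    broker[destination].put(body)

def consume(destination):
    """Custody service: pull the next event from its destination."""
    return broker[destination].get_nowait()

# The two services never call each other directly; only messages cross
# the domain boundary, which is what makes the monolith safe to split.
publish("custody.events", {"order_id": 42, "status": "shipped"})
event = consume("custody.events")
print(event["order_id"])  # 42
```

The design point is that once every cross-domain call is a message, each domain can become its own deployment without changing the application logic.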
How could we say that with confidence, though?
Architecturally, we understood two things as absolutes:
Next, you'll need to determine how communication will work in your microservices architecture with Kubernetes.
To do this, you should be able to enumerate the services in your current architecture, both externally and internally.
Externally is easier because you can ask the firewall team, but internally is a bit more difficult. How does Module A in your service communicate with Module B? If it's not using HTTPS, is it using a portable protocol that's going to be compatible with your service mesh? Can it be encapsulated in HTTPS, perhaps over WebSocket?
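One way to work through those questions is to write the inventory down and flag the links that can't ride the mesh as-is. The sketch below does exactly that; the service names, protocols, and the set of "mesh-friendly" protocols are illustrative assumptions, not a statement about any particular service mesh.

```python
# Protocols we assume the mesh can route natively (illustrative only).
MESH_FRIENDLY = {"https", "grpc", "websocket"}

# (caller, callee, protocol) -- a hypothetical internal-call inventory.
internal_calls = [
    ("lobapp", "jbpm", "https"),
    ("camel", "activemq", "openwire"),
    ("lobapp", "postgresql", "postgres-wire"),
]

def needs_encapsulation(calls):
    """Return the links that would need an HTTPS/WebSocket wrapper."""
    return [(a, b, p) for a, b, p in calls if p not in MESH_FRIENDLY]

for caller, callee, proto in needs_encapsulation(internal_calls):
    print(f"{caller} -> {callee}: {proto} is not mesh-friendly as-is")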
Finally, you'll need to consider security.
Because the application had already implemented JWT and SSO correctly (through Keycloak), implementing multi-tenancy wasn’t an issue as far as authentication was concerned.
All of the customers already came through url.tld/app/customerinstance to be routed to their VM. A soft-multitenancy model could continue to be supported in this way. The global VIP outside of the private datacenters could load balance traffic to a geographically appropriate DC or a hot-spare DC.
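The path-based routing described above is simple enough to sketch. The helper below extracts the customer instance from a `url.tld/app/customerinstance` path, which is the shape the article describes; everything else (the function name, the example tenant) is hypothetical.

```python
from urllib.parse import urlparse

def tenant_from_path(url):
    """Return the customer instance from a /app/<customerinstance>/... path,
    or None if the URL doesn't match that shape."""
    parts = urlparse(url).path.strip("/").split("/")
    if len(parts) >= 2 and parts[0] == "app":
        return parts[1]
    return None

# In a soft-multitenancy model, this value selects the tenant's routing
# target instead of a dedicated per-customer VM.
print(tenant_from_path("https://url.tld/app/acme/dashboard"))  # acme
```

Because the tenant is already in the URL, the same routing rule works whether the backend is a per-customer VM or a shared multi-tenant service, which is what makes the migration incremental.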
That’s where this exercise can get exciting, because in the case of this application, the monolith deconstructs pretty gracefully. But what if the “WildFly / KIE jBPM / Drools” aspect wasn’t something that could be containerized or even virtualized? What if it was an AS/400 mainframe?
This is where the abstraction layers in Camel can get exciting for your team: extract as much as possible away from the monolith, so that you can n-tier and elasticize the workload.
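As a minimal sketch of that extraction idea, the adapter below puts a thin, stateless facade in front of a backend you can't containerize, so the adapter layer, not the legacy system, is what you replicate and scale. Both classes are stand-ins: `LegacyBackend` plays the role of the AS/400-style system, and `BackendAdapter` is the piece you would actually run in Kubernetes.

```python
class LegacyBackend:
    """Stand-in for a non-containerizable system (e.g. an AS/400)."""
    def run_job(self, payload):
        return {"result": payload.upper()}

class BackendAdapter:
    """Stateless facade: many replicas can front one legacy backend."""
    def __init__(self, backend):
        self.backend = backend

    def handle_message(self, message):
        # Translate a broker message into a legacy call, and the legacy
        # response back into something the rest of the mesh understands.
        return self.backend.run_job(message["payload"])

adapter = BackendAdapter(LegacyBackend())
print(adapter.handle_message({"payload": "lot-7"}))  # {'result': 'LOT-7'}
```

Camel's component model plays this adapter role in the real stack; the point of the sketch is only that the elastic, n-tier part of the system ends at the adapter, with the legacy system behind it treated as a fixed resource.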
Transitioning to microservices architecture and adopting Kubernetes can be challenging, unless you enlist the help of experts.
OpenLogic experts are skilled in all things microservices and Kubernetes. We can help you:
Talk to an OpenLogic expert today to learn how we can help you get the speed, flexibility, and ease-of-use of Kubernetes and microservices architecture.
Talk to an Expert
Enterprise Architect, OpenLogic by Perforce
With over a decade of experience in enterprise software architecture, engineering, and operations for the Fortune 500, Connor is working to build and support cloud native solutions for OpenLogic customers around the world.