A Roadmap From Monolithic Web App to Cloud Native Service Mesh
The landscape of the Internet is drastically changing once again. Web 3.0 is barreling down on your organization and has probably already placed its first few insistent knocks on your front door, even if you have not realized it yet. The monolithic three-tiered web application architecture we have relied on so heavily over the past 20 years is on its deathbed, and the death rattle is coming fast. Even though many of these systems will persist into the future, much like the legacy applications before them, organizations that are not prepared for this transition will find themselves shut out of this new playground and the markets it will inevitably enable.
The move to serverless architecture
Instead of the heavy, immobile JVMs with their voluminous heaps that dominated the web application space of yesterday, the transition to lightweight, highly portable serverless architectures is well under way. In the same way the monolithic web application architecture paved the transition to Web 2.0, interoperable microservices that expose APIs defined in service registries, running across public and private high-performance service meshes, will be the basic underpinnings of the next evolution of the Internet. Unfortunately, the monolith and the service mesh will not, by their nature, play well together, and as more industry leaders move to service mesh infrastructure, your organization will need to make that change too if it wants to play with the cool kids. The good news is that you can get there from here. With a little planning, you can support your legacy monolithic web applications while simultaneously migrating to microservices and a service mesh in a planned and controlled manner. A staged implementation (for example, moving from a monolithic environment to a monolith/mesh hybrid, and finally to a holistic service mesh) lets your organization pick and choose when and how to migrate its services and applications to this cloud native solution.
The monolith/mesh hybrid
In the hybrid stage, the goal is to stand up the basic technologies a service mesh requires and integrate them into your monolithic architecture without affecting your existing applications. Your organization can then port services and applications from the monolith to the mesh methodically, within its current development lifecycles, while maintaining current service levels.
The image above shows, in a general sense, what this looks like logically. The familiar elements and concepts of our monolithic tiered infrastructure are still present, but the hybrid introduces two new key technologies that your full service mesh will eventually leverage.
Serverless-based application delivery
The first piece of technology we'll want to introduce to our enterprise stack is some form of serverless-based application delivery that will be responsible for running our microservice APIs. Serverless-based application delivery is the core of any microservices architecture: an application is deployed as a single artifact that bundles its framework and the bootstrapping needed to launch its own JVM and serve requests, without deploying a full-blown J2EE application server. This method of delivery is far more lightweight and portable than the application servers most organizations use today, and it enables more elastic scalability as well as greater portability across the enterprise and public/private clouds.
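To make this concrete, here is a minimal sketch of a self-bootstrapping service using the JDK's built-in `com.sun.net.httpserver` package. It is an illustration of the single-artifact pattern, not a recommendation of a specific framework; the class name, port, and `/health` endpoint are hypothetical.

```java
import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.nio.charset.StandardCharsets;

// A microservice that bootstraps its own HTTP listener from a single
// artifact: no external J2EE application server is deployed or required.
public class HealthService {
    public static HttpServer start(int port) throws Exception {
        HttpServer server = HttpServer.create(new InetSocketAddress(port), 0);
        server.createContext("/health", exchange -> {
            byte[] body = "{\"status\":\"UP\"}".getBytes(StandardCharsets.UTF_8);
            exchange.getResponseHeaders().add("Content-Type", "application/json");
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream os = exchange.getResponseBody()) {
                os.write(body);
            }
        });
        server.start();
        return server;
    }

    public static void main(String[] args) throws Exception {
        // Port 8080 is an arbitrary example choice.
        start(8080);
        System.out.println("Service listening on :8080");
    }
}
```

In practice the same pattern is what embedded-server frameworks (Spring Boot, Quarkus, and similar) provide: one runnable jar per microservice, each owning its own lifecycle.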
The next piece of technology that differs from our typical tiered architecture is the service registry, which lies at the heart of our service mesh. Each individual microservice node registers its location and service information with the registry. When an upstream consumer or web service requires a given microservice, it queries the registry for the information it needs to talk to that service, builds a list of available nodes and locations for that microservice, and connects to them directly.

Normally, a microservice would register its own individual node and port information in the registry, allowing for client-side load balancing. In the hybrid configuration, however, we force all nodes of a microservice to register a single DNS name that we have resolving to a VIP on our load balancer. This VIP sits in front of a load balancer pool that houses the individual microservice nodes. Registering our microservices this way does limit the elasticity and dynamic nature of the service registry, but it lets us leverage the existing highly available infrastructure that our internal and external SLAs require of our services.

Now, when upstream applications or user gateways require a microservice, they use the dynamic information found in the service registry instead of statically configured information on the local server. The registry returns a DNS address that resolves to our load balancer VIP, and the load balancer terminates the connection to an individual microservice node according to our defined layer 7 health checks and load-balancing algorithms.
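The hybrid registration pattern can be sketched as follows. This is a simplified in-memory stand-in for a real registry such as Consul or Eureka, purely to show the data flow; the class, service name, and DNS name are all hypothetical. The key point is that every node of a service registers the same load-balancer-backed DNS name rather than its own host and port.

```java
import java.util.Map;
import java.util.Optional;
import java.util.concurrent.ConcurrentHashMap;

// Simplified sketch of a service registry used in the monolith/mesh hybrid.
// In the hybrid, all nodes of a microservice register one shared DNS name
// (which resolves to a load balancer VIP) instead of individual host:port
// pairs, so the existing HA load-balancing tier keeps doing its job.
public class HybridServiceRegistry {
    private final Map<String, String> entries = new ConcurrentHashMap<>();

    // Called by each microservice node at startup. Because every node of
    // "inventory" registers the identical VIP-backed name, repeated calls
    // are idempotent.
    public void register(String serviceName, String dnsName) {
        entries.put(serviceName, dnsName);
    }

    // Called by upstream consumers instead of reading a static address
    // from local configuration.
    public Optional<String> lookup(String serviceName) {
        return Optional.ofNullable(entries.get(serviceName));
    }

    public static void main(String[] args) {
        HybridServiceRegistry registry = new HybridServiceRegistry();
        // Two nodes of the same microservice register the shared DNS name.
        registry.register("inventory", "inventory.service.internal");
        registry.register("inventory", "inventory.service.internal");
        // The consumer resolves the name; DNS and the load balancer then
        // steer the connection to a healthy node.
        System.out.println(registry.lookup("inventory").orElse("not found"));
    }
}
```

In the full mesh, the `register` call would instead publish each node's own address, restoring client-side load balancing; the hybrid deliberately trades that away for the existing VIP infrastructure.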
The way forward
In conclusion, the monolith/mesh hybrid gives us a way forward from the architecture of the past to the architecture of the future. The solution does have its limitations, though: it does not let your organization leverage the full benefits of the service mesh or its cloud native architecture, it is not very portable, and it will not migrate easily into a cloud or across existing clouds. It is, however, a path that leverages your existing technology while laying out a methodical road map to the future. Stay tuned for part 2 of this blog post, where we'll lay out the second phase of the roadmap and detail the transition from this hybrid to the full mesh. And as always, please feel free to contact OpenLogic if you'd like a more tailored analysis and plan for your organization.