Apache Camel is the industry standard for reducing boilerplate code in complex integrations, all while providing features like automatic error handling, redelivery policies, and the ability to handle complex aggregations. Camel is an integration framework whose domain-specific language implements the Enterprise Integration Patterns, and it helps organizations solve some very specific problems.
One of the benefits of open source software, and of building an application on a framework, is that the testing of that framework is outsourced to the community. For example, if you're using Spring Data to consume data from a Postgres database, the only business logic your team is responsible for testing is your own use of the Spring framework. Your development team isn't writing unit tests to validate that Spring Data returns rows and columns correctly from the JDBC datasource, or any of the underlying technology the framework is built on: JNDI, the JVM, the Linux kernel.
There's a lot that goes into a stack, and one more piece of boilerplate that never needs to be written again is the code for handling the routing, transformation, scheduling, error handling, redelivery, and aggregation of batches and streams of messages. This is why Apache Camel is the best tool for handling these complexities.
There's a little exercise I like to do when introducing Apache Camel to new developers and development managers. I ask about a complex integration problem that the enterprise is currently facing or has written a solution around in the past. A classic one I've heard a few times: a timekeeping system at a remote site like a factory, disconnected from any API, whose output needs to be consumed to run payroll. I hear about shell scripts that automate the movement of this file via FTP, and how certain columns need to be dropped for privacy, maybe because of GDPR or State of California privacy requirements regarding PII.
I hear about sed, awk, and complicated, hard-to-test, easy-to-fail transformations that eventually connect to another integration, maybe with curl against a real API into the main NetSuite system for payroll. After the problem is described, I ask how many total lines of code make up the solution (usually hundreds, or thousands if there's error handling, complex transformation, routing, and aggregation). I then tell the development team that Apache Camel can handle even the most complex of these scenarios in 10, maybe 20 lines of code.
Apache Camel can do this because it is an amazing component-driven, message-oriented routing and normalization framework. It integrates tightly with Spring, and you can write Camel "routes" as XML or, my favorite, fluent-style Java code.
Camel provides an Inversion of Control (IoC) approach to data routing that allows for a seamless transition of messaging data between a wide variety of integration components. When we say a wide variety here, we mean hundreds of well-tested components. Consuming from OracleMQ, stripping a sensitive field like SSN from the JSON, and placing those messages onto a Kafka topic? Three lines of code. Check it out below:
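The original snippet isn't reproduced here, but a sketch of what such a route could look like in the fluent Java DSL follows. The endpoint URIs, queue and topic names, and the masking expression are all illustrative, and the route assumes the camel-jms and camel-kafka components are on the classpath; where the description mentions JsonPath to target the SSN field, this sketch masks by pattern using Camel's Simple language instead:

```
import org.apache.camel.builder.RouteBuilder;

public class MaskSsnRoute extends RouteBuilder {
    @Override
    public void configure() {
        from("jms:queue:employee-records")                  // 1: consume records from the JMS queue
            // 2: mask anything shaped like an SSN, preserving the last four digits
            .setBody(simple("${body.replaceAll('\\d{3}-\\d{2}-(\\d{4})', 'XXX-XX-$1')}"))
            .to("kafka:employee-records?brokers={{kafka.brokers}}");  // 3: publish to a Kafka topic
    }
}
```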
The first line consumes messages from JMS, the second uses JsonPath to pull the SSN from the record and replace any SSN pattern with a mask like XXX-XX-1234, and the third places the transformed message onto a Kafka topic.
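Stripped of the Camel plumbing, the masking step itself boils down to a regular-expression replace. In plain Java, with an illustrative mask that preserves the last four digits, it looks like this:

```java
public class SsnMasker {
    // Replace anything shaped like an SSN (3-2-4 digits), keeping the last four.
    static String mask(String text) {
        return text.replaceAll("\\d{3}-\\d{2}-(\\d{4})", "XXX-XX-$1");
    }

    public static void main(String[] args) {
        String record = "{\"name\":\"Ada\",\"ssn\":\"123-45-6789\"}";
        // Prints the record with the SSN masked to XXX-XX-6789
        System.out.println(mask(record));
    }
}
```

The same expression is what a Simple-language `replaceAll` inside a Camel route would apply to the message body.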
Camel's URI-based routing methodology allows for the composition of message consumers and producers at runtime based on config flags and environment variables, meaning a Docker container can be written to dynamically route messages to and from locations based on operator need.
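As a sketch of that runtime composition: Camel's property-placeholder syntax includes an `env:` function that resolves environment variables, so the source and destination of a route can be decided entirely by the container's environment. The variable names here are illustrative:

```
import org.apache.camel.builder.RouteBuilder;

public class DynamicRoute extends RouteBuilder {
    @Override
    public void configure() {
        // Endpoints are resolved from environment variables at startup, e.g.
        //   INPUT_URI=ftp://timeclock/export   OUTPUT_URI=file:/data/payroll
        from("{{env:INPUT_URI}}")
            .to("{{env:OUTPUT_URI}}");
    }
}
```

The same image can then be deployed as an FTP-to-file mover in one environment and, say, a JMS-to-Kafka bridge in another, with no code change.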
I recently worked out an architecture with a financial institution where we would use Camel as a cron runner with a definitive "finished" state, and allow Kubernetes to schedule it. Truly, a way to reduce boilerplate for integrations. Their use case was log shipping from a black-box system to Elasticsearch and Kibana for reporting. And there are a ton of integrations to choose from: 333 stable components at the time of writing! Just look at them all here.
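One way to get that definitive "finished" state is a route whose consumer fires exactly once and a camel-main runner configured to shut down after the exchange completes, so a Kubernetes CronJob sees the pod exit cleanly. The following is a minimal sketch under those assumptions; the file path and Elasticsearch endpoint options are illustrative, not from the engagement described above:

```
import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.main.Main;

public class LogShipJob {
    public static void main(String[] args) throws Exception {
        Main main = new Main();
        main.configure().addRoutesBuilder(new RouteBuilder() {
            @Override
            public void configure() {
                from("timer:once?repeatCount=1")                          // fire exactly once
                    .pollEnrich("file:/var/log/blackbox?fileName=app.log") // pull the log file
                    .to("elasticsearch://logs?operation=Index&indexName=blackbox");
            }
        });
        main.configure().withDurationMaxMessages(1);  // stop after one completed exchange
        main.run();                                   // JVM exits; Kubernetes marks the job done
    }
}
```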
Camel works great in modern DevOps environments that rely on containers and Kubernetes. Compatible with GraalVM and Quarkus, Camel can produce native images that use roughly 10% of the memory of a traditional cloud stack. This is game-changing for distributed cloud applications. See below:
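With Camel Quarkus, for example, producing that native image is the standard Quarkus native build, assuming a GraalVM or Mandrel toolchain is installed (the project name in the binary path is illustrative):

```
# Build a native executable of the Camel Quarkus application
./mvnw package -Dnative

# Run the resulting binary: startup in milliseconds, and a fraction of
# the resident memory of an equivalent JVM deployment
./target/my-camel-app-runner
```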
To be sure, though, Camel makes it simple and scalable to attach hundreds of different heterogeneous service endpoints in an efficient way. Its component-driven approach focuses on reducing boilerplate code when writing integration logic.
When you combine this reduction in boilerplate with a focus on writing unit and integration tests for your specific needs, you spend more time writing code that matters to the enterprise.
Introducing any new open source framework raises questions about support, especially for larger organizations. If your team doesn't already have a relationship with an open source support vendor, OpenLogic by Perforce works with large organizations around the world to support Apache Camel developers through training, consulting, and everyday support.
If you’re interested in learning more about how we can help your team evaluate Apache Camel, or even support an existing project, connect with an Enterprise Architect from OpenLogic.
Enterprise Architect, OpenLogic by Perforce
With over a decade of experience in enterprise software architecture, engineering, and operations for the Fortune 500, Connor is working to build and support cloud native solutions for OpenLogic customers around the world.