
The Case for Containerizing Middleware

May 2nd, 2017 6:00am

It’s one thing to accept the existence of middleware when applications are being moved from a “legacy,” client/server, n-tier scheme into a fully distributed systems environment. For a great many applications whose authors long ago moved on to well-paying jobs, containerizing the middleware upon which they depend may be the only way for them to co-exist with modern applications in a hybrid data center.

That’s why it’s a big deal that Red Hat is extending its JBoss Fuse middleware service for OpenShift. It’s also why Cloud Foundry’s move last December to make its Open Service Broker API an open standard can be viewed as a necessary event for container platforms.

But wait a minute. What do we think we’re doing? Wasn’t the whole point of containerization to isolate distributed functions and make them stateless so we could scale them up and down freely?  Should middleware, even in the broadest sense, be a permanent part of our container environments for its own sake? Or do we really intend to splash asterisks next to every one of the 12 factors in the 12-Factor App Methodology?

“Thing #1”

There is a tradeoff in moving from a monolith at one end of the spectrum to extremely fine-grained microservices at the other, said Mike Piech, Red Hat’s vice president and general manager for middleware, speaking with The New Stack.

“If you’re inside a monolith, you’re writing a big blob of code that’s essentially got this outer boundary to its world,” Piech said. “Everything inside that boundary is local, you’re sharing memory, you’re not going back-and-forth across a network that introduces latency. If you make a whole bunch of assumptions and cut a whole bunch of corners, a few things can go really fast. You have visibility into everything that’s around you. Up to a certain scale, that’s fine. That approach for building chunks of functionality served us for decades.  But it’s really hard to change any small piece of that application.”

Microservices architecture, he noted, does give developers the luxury of changing smaller components of code without fear of getting caught in the spaghetti. The tradeoff comes in production, he said, when the existence of the network, and the co-existence of everything else cohabiting that network, introduce latencies with which those granular elements often cannot adequately contend.

This is his argument in favor of a middleware approach: specifically, for the introduction of Red Hat’s Fuse, its enterprise service bus and its implementation of Apache Camel. At scale, services need a mediator of communication and an arbiter of requests — roles the orchestrator, preoccupied with staging components on the platform, is too busy to serve.

“When I speak in front of crowds these days, I typically talk about the benefits of the microservices approach,” remarked Piech. “By decomposing what would traditionally have been a monolithic application into a finer-grained set of components, each of which is independently life-cyclable, scalable, replaceable, swappable, you enable your overall business to be more agile. It’s able to make small, incremental changes much more rapidly, and with much less risk of disrupting the rest of the business.

“With all that said … in order to do microservices, I still have to have some discipline around APIs. I have to do some things differently as a developer. Each microservice has to have a well-defined API, and the developer has to be respectful of consistency and backward compatibility. Thing #1, you’ve got to get into the API mindset.  Like any software development approach, doing it well is partly what technologies you use, and partly how you use them.”

This leads to Piech’s pitch for Fuse, and thus for the presence of middleware (or something like it) in containerization: Having an arbiter of communication sharing the same scope as the application — put another way, on the opposite side of the wall of abstraction from the orchestrator — frees the developer to consider what a service is going to communicate through the API, rather than how it’s going to say it or when. Thus a service bus can do for distributed systems what service buses did for client/server and n-tier applications: play the role of the more congenial proxy.
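
To make that mediation role concrete, here is a minimal sketch in Apache Camel’s Java DSL of the kind of route a Camel-based bus such as Fuse is built around. The endpoint URIs and service names are hypothetical, and a real deployment would register this class with a Camel context and supply the ActiveMQ and HTTP components on the classpath.

    // A sketch of service-bus-style mediation in Apache Camel's Java DSL.
    // Endpoint URIs and service names are illustrative, not taken from the article.
    import org.apache.camel.builder.RouteBuilder;

    public class OrderMediationRoute extends RouteBuilder {
        @Override
        public void configure() throws Exception {
            // Retry transient failures here, so individual microservices don't have to.
            errorHandler(defaultErrorHandler().maximumRedeliveries(3));

            // Take orders off a queue so producers never block on a slow consumer,
            // hand each message to the inventory service, then publish the outcome.
            from("activemq:queue:incoming-orders")
                .to("http://inventory-service:8080/api/reserve")
                .to("activemq:topic:order-events");
        }
    }

The point of the sketch is the division of labor Piech describes: the route absorbs delivery, retry and fan-out concerns, while the services at either end only have to honor their APIs.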

The Other Type of Cloud Development

This leads us to a line of inquiry that is important not just for its architectural implications, but for its political ones as well: Does the incorporation of a methodology that was created outside the cloud, that first came to light before there was a cloud, and that introduces a hard dependency upon a presumably local component, truly constitute cloud-native development?

“Usually we divide the world into what we call cloud-enabled kinds of environments and applications, and cloud-native or cloud-centric environments and applications,” said Andre Tost, a distinguished engineer at IBM.

“I think in the context of middleware, it’s more in the cloud-enabled space,” Tost told The New Stack. “That means I have applications that weren’t built with a microservices architecture in mind, but that were built with the cloud factor in mind, that are based on three-tier or n-tier kinds of architectures using middleware. And now, we want to start benefitting from some of the cloud computing principles.”

Tost and Kyle Schlosser, who serves in the office of IBM’s Hybrid Cloud CTO, co-authored a guide last August to building Docker container images with built-in middleware. It introduces WebSphere developers to what Docker is and how it works, using language that may be more familiar to them. It then goes so far as to suggest that middleware inclusion wasn’t exactly what anyone who created container architectures had in mind.

So the guide makes this suggestion: Let the Docker image of the middleware be effectively incomplete, like a template. By means of a build process, a middleware provider may then add relevant packages to that image. Then the application provider completes the image by adding the application and its specific customizations. “Eventually,” Tost and Schlosser wrote, “Docker images for middleware must provide proper extension points that make it easy to add an application.”
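
As one illustration of that template idea, consider the pair of Dockerfiles sketched below. One way to provide such extension points is Docker’s ONBUILD instruction; the base image tag and /config paths follow the conventions of the official websphere-liberty image, while the organization and application names are hypothetical, and none of this is taken verbatim from Tost and Schlosser’s guide.

    # Middleware provider's Dockerfile: a deliberately incomplete, template-style image.
    # The ONBUILD instructions act as extension points: they run only when a downstream
    # build names this image in its own FROM line.
    FROM websphere-liberty:webProfile7
    ONBUILD COPY server.xml /config/
    ONBUILD COPY target/app.war /config/dropins/

    # Application provider's Dockerfile: completing the template can then be one line.
    FROM my-org/liberty-middleware-base

In practice the two files would live in separate projects, with the middleware team publishing the base image and the application team building on top of it.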

This gets to Tost’s definition of cloud enablement: taking an application, be it completed or conceptual, and refactoring it for cloud deployment. Before you dismiss the idea as applying only to retrofitting old, monolithic applications, consider the fact that most of the world’s software developers are bringing their skills with them. Starting over from scratch is not the course of action they’re likely to take first.

“Another aspect of cloud-enabling is that we simply take everything we have, and we put it into containers,” said Tost. “That’s what we were trying to address there [in the article]. I don’t know if the article makes that clear, but I’m fairly skeptical of doing this to begin with. I’m saying that because, I’ve spoken with companies that say, ‘Well, we’ll take everything we have and we won’t change anything, other than we’re going to go from virtual machines to containers.’ And I think they haven’t thought all the way through [to] what the expected benefit is going to be in the first place.”

Prior to the advent of containers, Tost said, IT automation worked like this: You create a VM, install an OS on it, install agents that look inside the operating environment, install the scripts that deploy the middleware and deploy server clusters, and then finally install the application.

From the perspective of Tost’s customers, the change that containerization represents is felt first and foremost in how automation works. It’s the container build process that changes everything: the fact that the image is built just before the application is set to run. The stack is reversed; the application comes first, and the other pieces go on top. That’s why Tost and Schlosser’s article begins with the Dockerfile and the build process.

At this point in the customer journey, if you will, whether or not middleware comes into play has not yet become a question. Why would it?

In Defense of Liberty

“In the past, we saw the application server as a sort of manager of application server workloads, but it was also a runtime for those workloads,” remarked Kyle Schlosser, speaking with us.  “We’ve seen that responsibility for the management of the workloads fall to the container service itself, such as Kubernetes. But the runtimes continue to be runtimes.  They’re perhaps factored into smaller units, and we don’t run as many applications in the same process space, as part of microservices architecture.

“When we try to bring traditional middleware to a container platform like Kubernetes, in that case, we have two pieces of software trying to perform the same functions,” Schlosser continued.  “They can work against each other, but in other cases, they can co-exist just fine.  We recognize, though, that the shift is happening.”

For that reason, he told us, IBM is investing in the creation of what it calls the Liberty MicroProfile, an instantiation of an architecture that provides a small, highly encapsulated subset of Java EE functionality, in a component designed specifically to be used with microservices.

“We need a Java form factor that appeals to those Java developers who are already transitioning to microservices development,” said Schlosser. “So we’re looking at, within the MicroProfile, not only how to make it more lightweight but how to provide capabilities which such a developer would be interested in.”
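
For a sense of the form factor Schlosser is describing, here is a minimal sketch of the kind of single-purpose JAX-RS endpoint that the first MicroProfile release (JAX-RS, CDI and JSON-P) centers on; the package, class and path names are hypothetical.

    // StatusResource.java: one narrowly scoped REST endpoint, the sort of unit a
    // MicroProfile runtime such as Liberty is meant to host. Names are illustrative.
    package example;

    import javax.ws.rs.GET;
    import javax.ws.rs.Path;
    import javax.ws.rs.Produces;
    import javax.ws.rs.core.MediaType;

    @Path("/status")
    public class StatusResource {

        @GET
        @Produces(MediaType.APPLICATION_JSON)
        public String status() {
            // A trivial health-style payload; a real service would return domain data.
            return "{\"status\":\"UP\"}";
        }
    }

Packaged as a thin WAR (along with an @ApplicationPath-annotated Application subclass to activate JAX-RS) and dropped onto a MicroProfile-capable Liberty server, this is the whole deployable unit; no full application server profile is required.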

In IBM’s world, there are cloud-enabled design principles and cloud-native principles. And between the two, there is significant overlap. Ironically, it may be these components designed to enable some of the old methodologies to work in the new environment that end up facilitating, and even expediting, evolutionary changes to the new environment for its own sake. For example, Schlosser notes, the notion of transactions in databases was created in a time of limited multithreading and sequential, synchronous logic. The asynchronicity of microservices design may end up unraveling much of that concept, thus altering the atomicity of database transactions.

Imagine a much more “holographic paradigm” (with apologies to Ken Wilber) where an analytics routine could explore multiple, separate alternatives to a chain of events concurrently, and you’ll see what I’m getting at. Think of how fraud detection might be influenced if a routine could peer into the implications of a particular event having not happened. A microservices architecture that omitted any notion of sequence or synchronicity might be incapable of such a feat.

“Some of these existing things in architectures will probably never go away,” said IBM’s Tost. “And that’s entirely driven not by how the world should be, but the fact that these environments seem to linger forever.  CICS, COBOL, even RPG and the AS/400 are still around, and will probably stick around for twenty years. And I think something like J2EE-style middleware or MQ-style middleware is not going to disappear anytime soon, if ever — again, not because of technical desire or because of benefits, but simply because of the volume of workloads that have been built on top of them that will be maintained going forward, and will probably never be rewritten.”

Title image, entitled, “The Height of Communications in the 21st Century,” by Bob Frankston (yes, that Bob Frankston, the co-creator of VisiCalc), from his ongoing collection of thousands of whap-jawed, slapdash communications line repair jobs, mostly in Massachusetts, on his personal Flickr feed.

Cloud Foundry and Red Hat are sponsors of The New Stack.
