How Docker Stepped up the Game for Containers

Oct 27th, 2015 2:36pm

As successful as Docker is today, it is hard to fathom that the project got its start just two years ago at a struggling PaaS provider called dotCloud, which decided to open source the container engine underlying its platform. But Docker offered something that other container technologies had missed until then, namely the ability to seamlessly bridge the worlds of development and operations, according to Joyent CTO Bryan Cantrill. Yes, DevOps.

“For all those years, we talked about the operational efficiency of containers. What Docker does is actually use containers to deliver a development efficiency, making it easier to develop your app, not just deploy it. It allows developers to think operationally,” Cantrill said, speaking at this year’s Container Summit in San Francisco. “That is a big deal.”

In an enthusiastic and wide-ranging talk, Cantrill touched not only on the history behind containers, but also on how the use of containers within the development stack continues to evolve.

New advances in technology are all the more reason to delve into where the concept of containers came from: what is now known as the container-based workflow had humble beginnings in software isolation. Containers on the whole are not a new concept, having been used in some form for decades, but their promise is only now starting to unfold.

From Chroot to Jail

Although IBM had been playing with the concept since the 1970s, containers for today's server era really began with Bill Joy, then at Sun Microsystems, who began using the Unix chroot operation as a way to isolate software so that it wouldn't damage a system.

“Things reflect their origins.” – Bryan Cantrill, CTO, Joyent

Running software in a chroot environment was the first instance of software being contained to run stand-alone, unable to affect or influence the system it ran on. Chroot was by no means foolproof, with a variety of ways to break out of it or view the system underneath. FreeBSD aimed to solve these pain points by introducing jails, in which untrusted software could run independently without compromising the system.
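
To make the isolation concrete, here is a minimal sketch of chroot(2)-style confinement, written in Go purely for illustration; the path /tmp/newroot is a hypothetical, pre-populated root tree, and the program must run as root:

```go
// chroot_demo.go: a minimal sketch of chroot(2)-style isolation.
// /tmp/newroot is a hypothetical, pre-populated root tree; the
// program must be run as root for Chroot to succeed.
package main

import (
	"fmt"
	"os"
	"syscall"
)

func main() {
	// Confine this process's view of the filesystem to /tmp/newroot.
	if err := syscall.Chroot("/tmp/newroot"); err != nil {
		fmt.Fprintln(os.Stderr, "chroot failed:", err)
		os.Exit(1)
	}
	// Move into the new root; "/" now refers to /tmp/newroot.
	if err := os.Chdir("/"); err != nil {
		fmt.Fprintln(os.Stderr, "chdir failed:", err)
		os.Exit(1)
	}
	// From here on the process cannot name files outside the new root.
	entries, err := os.ReadDir("/")
	if err != nil {
		fmt.Fprintln(os.Stderr, "readdir failed:", err)
		os.Exit(1)
	}
	for _, e := range entries {
		fmt.Println(e.Name())
	}
}
```

As the comments note, chroot alone is not a security boundary; the well-known escapes are exactly what jails and Zones were designed to close.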

Sun Microsystems saw the direction FreeBSD was taking with jails and hoped to take it a step further with Zones. Solaris Zones, introduced in 2005, are used for more than just isolating applications: they consolidate them, offering stronger virtualization so that each application runs isolated in its own container. Sun coupled Zones with resource management tools, allowing developers to interact with them as a project requires, and secured the boundaries so that containers in different zones cannot interact with one another's processes.

Containers build upon this concept by isolating pieces of software, applications or microservices, and then orchestrating them as many small parts that work together while ultimately remaining independent. Service discovery tools allow containers to find and interact with one another. As such, chroot, Solaris Zones and FreeBSD jails laid the groundwork for the container architecture in place today on platforms such as Docker.

Cantrill also noted that OS virtualization is useful for running different applications together, though it falls short when the goal is to virtualize and consolidate the entire technology stack, including the OS itself.

The Honeywell 200, the machine that made virtualization necessary.

IBM’s hardware-level virtualization solved this issue back in the 1970s, though, as an older technology, it came with issues of its own. The IBM System/360 was one of the most important pieces of computing hardware in history. IBM hoped the System/360 would be the machine to consolidate all existing machines, with the System/360 instruction set serving as the lingua franca.

This approach, however, required developers to rewrite their applications in the System/360 instruction set, which was impractical, especially for programs already written for the IBM 1400 series.

Capitalizing on this, Honeywell came out with the Honeywell 200, whose Liberator software let users run code written for the IBM 1401 on an H200. As a result of Honeywell's virtualization of the 1401 platform, IBM added 1401 virtualization to the 360.

“45% of Y2K problems came from a single architecture, IBM 1401. The IBM 1401 is a very old machine, and software is still running on it.” – Bryan Cantrill, Joyent

If one wants to run software indefinitely, hardware virtualization allows for this to happen without issue. But as technology continued to progress, further abstraction was needed: applications no longer run on actual system hardware, but atop layers of abstraction that present themselves to the application as something they are not. Cantrill argued that OS-based virtualization is the only approach that makes sense going forward.

Back to the Future

Which brings us back to Docker. With Docker, developers can integrate “all their dependencies into what is effectively a static binary,” one that can be shipped into production, Cantrill said. A developer can create an application on a laptop, then run it in production unaltered.
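
As a rough illustration of that “effectively a static binary” idea, consider a trivial Go service (the example and its names are ours, not Cantrill's). Built with CGO_ENABLED=0, it produces a single self-contained executable; a Docker image plays the same role for applications whose dependencies are not so easily bundled:

```go
// hello.go: a trivial service with no runtime dependencies beyond the
// kernel. Compiled with CGO_ENABLED=0, it yields one static binary that
// behaves identically on a laptop and in production, the immutable
// artifact property a Docker image generalizes to whole app stacks.
package main

import (
	"fmt"
	"log"
	"net/http"
)

func main() {
	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprintln(w, "hello from an immutable artifact")
	})
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```

Packaged into a minimal image (FROM scratch, then COPY the binary in), the same artifact runs unaltered wherever a Docker engine is available.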

“Docker is doing to apt what apt did to tar. We’ve up-leveled our thinking, allowing developers to think operationally.” – Bryan Cantrill, Joyent

Cantrill noted that many common issues cited as “Docker” pain points are not truly Docker issues at all. Deploying Docker inside a VM has been cited as the key to running containers securely, though this severely undermines their performance; there is still a need to be able to deploy containers “on the metal.”

Joyent offers a solution that aims to push container technology forward with SmartOS, a type 1 hypervisor based on illumos.

SmartOS can run Docker containers in production, with new container instances provisioned via API through Joyent's SmartDataCenter. This negates the need to provision VMs, leaving developers responsible only for the allocation of containers. As such, Cantrill noted, software development teams must move from an allocation mindset to a consumption mindset as more open source offerings arrive within the container ecosystem.
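
The API in question is the Docker Remote API, which is also how Docker-compatible endpoints such as Joyent's are typically driven. A minimal sketch in Go, written against a local engine's stock Unix socket (a remote endpoint would be an https URL with TLS configuration instead, but the request itself is the same):

```go
// list_containers.go: a minimal sketch against the Docker Remote API.
// It talks to a local engine over the stock Unix socket; a remote,
// Docker-compatible endpoint would be reached over https instead.
package main

import (
	"context"
	"encoding/json"
	"fmt"
	"net"
	"net/http"
)

// container holds the few fields we use from GET /containers/json.
type container struct {
	ID    string   `json:"Id"`
	Image string   `json:"Image"`
	Names []string `json:"Names"`
}

func main() {
	// Route HTTP over the engine's Unix socket instead of TCP.
	client := &http.Client{
		Transport: &http.Transport{
			DialContext: func(ctx context.Context, _, _ string) (net.Conn, error) {
				return net.Dial("unix", "/var/run/docker.sock")
			},
		},
	}
	// GET /containers/json lists the running containers.
	resp, err := client.Get("http://docker/containers/json")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	var containers []container
	if err := json.NewDecoder(resp.Body).Decode(&containers); err != nil {
		panic(err)
	}
	for _, c := range containers {
		fmt.Printf("%.12s  %s  %v\n", c.ID, c.Image, c.Names)
	}
}
```

Only the transport differs from endpoint to endpoint; the container listing, creation and start calls are the same wherever the Remote API is served.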

The rise of open source software development introduces rival frameworks at every layer, offering development teams the ability to choose the solution that will truly be best for their app throughout every stage of its lifecycle, while allowing for easier pivoting as more client use cases are taken on.

To realize the full potential of containers, developers must shift the way they approach problems, Cantrill said. To harness the true power of a container-based technology stack, developers must move away from VMs and hardware virtualization. As the Docker landscape evolves, software developers must streamline, adapt and modify their technology stacks to ensure that application development remains a fluid process.

Joyent, Docker and IBM are sponsors of The New Stack.
