Helm: Past, Present, Future



Kromhout: Welcome to our QCon Online 2020 session, where we look at the past, present, and future of the Helm project. I’m Bridget Kromhout. With me are Helm maintainers who can give us insight into this open source project, which today is a graduated CNCF project used by upwards of 70% of those using Kubernetes.

Helm’s Origin Story

Let’s look at how it got there. To set the stage, we have Matt Butcher. Matt, would you like to give us Helm’s superhero origin story?

Butcher: In 2015, I was working for a company called Deis. Deis was best known for making a PaaS solution called Deis Workflow. We had been looking at a number of different orchestration systems. A lot of the work we were doing was already container-oriented. We were pretty convinced that containerized workloads and schedulers were going to be the next big thing. After doing some R&D work, we decided that Kubernetes was looking like the most promising solution, and so we decided to go all in on Kubernetes development. In order to get everybody on board with this, we had a two-day all-company meeting. This brought in everyone from marketing and communications through engineering and ops. We all got together in a room, discussed Kubernetes, discussed what we were going to do and how that was going to change our plans. Then we decided to do a hackathon project: break everybody into teams and just try some coding. People did all kinds of different things, and had lots of different ideas.

The team that I got together with was Jack Francis and Rimas. The three of us sat down and said, “What would it look like if we built a package manager for Kubernetes?” For every nook and cranny of time we could get for the next couple of days, we kept working on this little thing. At the time, it was called K8S Place. We thought that was really cute. We built this simple little package manager for Kubernetes. At stake in this entire thing was a $75 gift card for Amazon. We really wanted that $75 Amazon gift card. We worked really hard. Got up there at the presentation at the end of this two-day all-hands and presented our hokey little package manager. It was cool. It was close, but we did end up — and I know this is the most important part of the story — winning the $75 gift card, which I subsequently squandered on coffee.

The day after the hackathon was over, the CTO and CEO of the company called me in. That’s one of those things where you’re like, “Uh-oh, what did I do?” They said, “Hey, we were thinking. That K8S Place thing was kind of cool. A package manager for Kubernetes might be a really interesting thing to build. The inaugural KubeCon is coming up in only a month. Let’s do it, let’s turn it into a big product, turn it into a product that we can share there.” I said, “Okay.” Then he said, “Just one thing. We’re not really wild about the name K8S Place. Do you think you can come up with something else?” I went, “Yeah, sure.” Jack Francis and I sat down with a nautical dictionary, which was a very interesting experience for both of us because neither of us had spent any time on the ocean or on boats or anything.

We just kept reading words out loud back and forth to each other until he said, “Helm.” I went, “Helm? That sounds like a good idea.” Then we worked a little bit on packages and came up with Charts. That really got us rolling. We spent the first couple of months developing that, then showed it off at the inaugural KubeCon. That KubeCon was very small, but it was a lot of fun. Everybody there was hyper-interested. Kubernetes 1.2 had just come out, and so everybody was really interested in what we could do with it.

A few months after that, Google called us and said, “Do you want to come up to Seattle, visit us, and we can sit down and talk about some new ways of pushing Helm forward?” We went up there, sat down, and did a big design session. That’s where Helm 2 came from. Out of that meeting, we came out with a plan and started working on it. Within eight months, we had dozens and dozens of companies and hundreds of engineers all contributing bits and pieces to the Chart repository and to Helm itself. That’s when Helm 2 dropped.

Then, a couple of years ago, we started saying, “Helm hasn’t quite kept up with the recent developments in Kubernetes. We need to revisit some of the assumptions that we made back in the Kubernetes 1.2 days and see if they still hold true in Kubernetes 1.12.” They didn’t. Helm 3 was born out of this effort to figure out how to evolve the project to really meet the way clusters work today and the way DevOps teams work today. That’s how Helm got started.

Kromhout: That is a whole heap of an origin story. This is actually fascinating because you start unpacking that and you think, “There’s the Helm project itself, but the way people use it, there’s a lot more to it than that.” The community is vast and contains multitudes. This is an interesting scaling problem.

Scaling Problem

We also have Matt Farina here to tell us a little bit about how that works.

Farina: I think scaling the community is the interesting part, because the community isn’t just the work on the core Helm client that everybody uses; it’s all these charts, all these packages that you can install. When I joined the Helm project, which was a little later, I joined to help with the Helm Charts. At the time, there were fewer than 100 charts in the stable and incubator repositories. These are chart repositories that were hosted by the Helm project. It was growing.

One of the first things I did when I came in was figure out how to manage this growth. At the time, you had a handful of charts maintainers who managed all of the charts; they managed all of the applications every time a pull request, a change request, came in. How did that work? It turns out that’s a lot of work when you look at all of this [inaudible 00:06:29] and do it by hand. If you want to scale up without burning out your maintainers, especially in these charts, what do you do? You add automation. We spent a fair amount of time, in that first stretch after I joined as a charts maintainer, figuring out how to automate and how to scale the community: how do you scale managing all of these applications out of that central repository?

Most of that work was in the stable repository, because it held stable applications. How do you scale that? We added layers of automation to do the things people had been manually checking. That was the only way we could have scaled, because as Helm grew, there were a lot more than 100 applications. The repository just grew, both in the number of applications and in the amount of activity each chart was getting. You just saw this scaling problem grow. This all happened in the Helm 2 era.

Helm 2 Alpha 2 added the stable repository as a default in Helm 2. When Helm 2 came out, you just saw this massive growth in charts and applications. We had to figure out automation. We went so far as to bring in the Kubernetes automation that uses OWNERS files, so we could have individual people owning individual charts and able to merge changes into those charts. Even with all the automation we added on, we eventually couldn’t handle the scaling problem anymore.
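The Kubernetes-style OWNERS automation mentioned here works by placing a small YAML file in each chart’s directory, listing who can approve and review changes to it. A minimal sketch, with an illustrative chart path and usernames:

```shell
# Illustrative: create a per-chart OWNERS file of the kind used by
# Kubernetes-style review automation. Users under "approvers" can
# approve merges for this chart; "reviewers" are auto-suggested on
# incoming pull requests. Path and usernames are hypothetical.
mkdir -p stable/mychart
cat > stable/mychart/OWNERS <<'EOF'
approvers:
  - alice
reviewers:
  - bob
EOF
```

With files like this in place, the tooling can route each pull request to the people responsible for that specific chart, instead of funneling everything through a central group of maintainers.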

Helm was designed with distributed charts in mind: you could have charts in all of these different repositories, add those repositories, and install from them. It had been like that for years, but we weren’t taking advantage of it because everybody was putting everything into the stable repository. A couple of years ago, we decided to make the intentional move to distributed charts, because there were pain points. People who wanted to maintain their charts all had to use the one workflow that we came up with. They couldn’t use their own workflows. They couldn’t integrate it with their own application releases. They had to wait on charts maintainers to do things. Sometimes that was frustrating for both them and us.

We wanted to go to a distributed model, and we took advantage of Helm’s ability to do that. We did things such as stand up the Helm Hub, which has since been superseded by Artifact Hub, which will search for more than…
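The distributed workflow described above can be sketched as a short CLI session. This is a hedged example: the repository name, URL, and chart name are illustrative, not a specific real repository, though the `helm` subcommands themselves are standard Helm 3 commands.

```shell
# Register a third-party chart repository by name and URL
# (both illustrative here)
helm repo add examplerepo https://charts.example.com

# Refresh the local cache of charts available from registered repositories
helm repo update

# Install a chart from that repository as a named release in the cluster
helm install my-release examplerepo/some-chart

# Search across many distributed repositories indexed by Artifact Hub
helm search hub wordpress
```

Because any team can host its own repository and publish charts on its own release cadence, this model removes the central charts maintainers as a bottleneck, which is exactly the pain point the move to distributed charts was meant to solve.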

