OpenFaaS - Run Containerized Functions On Your Own Terms

For a long time, serverless was just a synonym for AWS Lambda to me. Lambda offered a convenient way to attach an arbitrary piece of code to a platform event like a cloud instance changing its state, a DynamoDB record update, or a new SNS message. But from time to time, I would also come up with a piece of logic that wasn't big enough to deserve its own service and, at the same time, didn't fit well into the scope of any of the existing services. So, I'd often put it into a function too, to invoke later with a CLI command or an HTTP call.

I moved away from AWS a couple of years ago, and since then, I've been missing this ease of deploying serverless functions. So I was pleasantly surprised when I learned about the OpenFaaS project. It makes it simple to deploy functions to a Kubernetes cluster, or even to a VM, with nothing but containerd.
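To give a quick taste of that ease before the full write-up, here is roughly what the happy path looks like with the faas-cli tool (a sketch only - it assumes a running OpenFaaS gateway on 127.0.0.1:8080, and the function name and language template are picked just for illustration):

```bash
# Scaffold a new function from a template (creates a handler and a stack YAML file;
# the exact stack file name may differ depending on the CLI version).
faas-cli new hello --lang python3

# Build the container image, push it, and deploy the function behind the gateway - all in one go.
faas-cli up -f hello.yml

# Invoke the function over plain HTTP via the gateway.
curl http://127.0.0.1:8080/function/hello -d 'anyone home?'
```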

Intrigued? Then read on!

Read more

Learning Containers From The Bottom Up

When I started using containers back in 2015, my initial understanding was that they were just lightweight virtual machines with a subsecond startup time. With such a rough idea in my head, it was easy to follow tutorials from the Internet on how to put a Python or a Node.js application into a container. But pretty quickly, I realized that thinking of containers as VMs is a risky oversimplification that didn't allow me to judge:

  • What's doable with containers and what's not
  • What's an idiomatic use of containers and what's not
  • What's safe to run in containers and what's not
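Here is one quick illustration of why the VM analogy is leaky (a sketch, assuming Docker is installed and using the public alpine image): a container doesn't boot its own kernel - it shares the host's.

```bash
# The kernel version reported on the host...
uname -r

# ...is exactly the same kernel the "guest" sees from inside a container.
docker run --rm alpine uname -r

# Compare with a VM, where the guest boots and runs its own, separate kernel.
```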

Since the "container is a VM" abstraction turned out to be quite leaky, I had to start looking into the technology's internals to understand what containers really are, and Docker was the most obvious starting point. But Docker is a behemoth doing a wide variety of things, and the apparent simplicity of docker run nginx can be deceptive. There was plenty of materials on Docker, but most of them were:

  • Either shallow introductory tutorials
  • Or hard reads indigestible for a newbie

So, it took me a while to find my way through the containerverse.

I tried tackling the domain from different angles, and over the years, I managed to come up with a learning path that finally worked out for me. Some time ago, I shared this path on Twitter, and evidently, it resonated with a lot of people.

This article is not an attempt to explain containers in one go. Instead, it's a front page for my multi-year study of the domain. It outlines that learning path and then walks you through it, pointing to more in-depth write-ups on this same blog.

Mastering containers is no simple task, so take your time, and don't skip the hands-on parts!

Read more

Containers vs. Pods - Taking a Deeper Look

Containers could have become a lightweight VM replacement. However, the most widely used form of containers, standardized by Docker/OCI, encourages you to have just one process (service) per container. Such an approach has a bunch of pros - increased isolation, simplified horizontal scaling, higher reusability, etc. However, there is a big con - in the wild, virtual (or physical) machines rarely run just one service.

While Docker tries to offer some workarounds for creating multi-service containers, Kubernetes takes a bolder step and chooses a group of cohesive containers, called a Pod, as the smallest deployable unit.
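For reference, this is roughly what that smallest deployable unit looks like - a minimal two-container Pod manifest (a sketch only; the names and images are picked just for illustration):

```bash
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: demo-pod
spec:
  containers:
  - name: web                  # serves HTTP on port 80
    image: nginx:alpine
  - name: sidecar              # a second container living in the same Pod
    image: busybox
    command: ["sleep", "3600"]
EOF

# The two containers share the Pod's network namespace,
# so the sidecar can reach nginx simply via localhost:
kubectl exec demo-pod -c sidecar -- wget -qO- http://localhost:80 >/dev/null && echo OK
```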

When I stumbled upon Kubernetes a few years ago, my prior VM and bare-metal experience allowed me to get the idea of Pods pretty quickly. Or so I thought... 🙈

When you start working with Kubernetes, one of the first things you learn is that every pod gets a unique IP and hostname, and that within a pod, containers can talk to each other via localhost. So, it's kinda obvious - a pod is like a tiny little server.

After a while, though, you realize that every container in a pod gets an isolated filesystem and that from inside one container, you don't see processes running in other containers of the same pod. Ok, fine! Maybe a pod is not a tiny little server but just a group of containers with a shared network stack.

But then you learn that containers in one pod can communicate via shared memory! So, probably the network namespace is not the only shared thing...

This last finding was the final straw for me. So, I decided to take a deep dive and see with my own eyes:

  • How Pods are implemented under the hood
  • What is the actual difference between a Pod and a Container
  • How one can create Pods using Docker

And along the way, I hope it'll help me solidify my Linux, Docker, and Kubernetes skills.
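To give a rough taste of the last point before the full write-up: a pod-like grouping can be approximated with plain Docker by making one container join another's namespaces (a sketch only; the flags are standard docker run options, and the container names are made up):

```bash
# Start a "sandbox" container that will own the shared namespaces.
# --ipc=shareable makes its IPC namespace joinable by other containers.
docker run -d --name pod-sandbox --ipc=shareable nginx:alpine

# Start a second container that joins the first one's network, IPC,
# and PID namespaces - much like a second container in the same Pod.
docker run -d --name sidecar \
    --network container:pod-sandbox \
    --ipc container:pod-sandbox \
    --pid container:pod-sandbox \
    busybox sleep 3600

# From the sidecar, nginx is reachable on localhost...
docker exec sidecar wget -qO- http://localhost:80 >/dev/null && echo "same network namespace"

# ...and its processes are visible, because the PID namespace is shared too.
docker exec sidecar ps
```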

Read more

Learn-by-Doing Platforms for Dev, DevOps, and SRE Folks

There are many resources for people who want to learn Linux, Containers, or Kubernetes. However, most of these resources don't come with an interactive, hands-on learning experience. You can read dozens of fine blog articles and watch hundreds of engaging YouTube videos, maybe even take some courses with theoretical quizzes at the end, but it's doubtful you'll master any of the above technologies without actively practicing them.

Theoretical-only knowledge of, say, Kubernetes doesn't really count. Hands-on exercises should be a must-have learning element. Some resources, including this blog, strive to provide reproducible instructions so that students can try out their new skills. However, that requires a running system, and setting one up can make the learning curve substantially steeper or even make the task insurmountable for inexperienced students.

So, where can a student practice the new skills?

One option is to experiment on a real staging (or production 🙈) environment. But that can be quite harmful. Luckily, there is an alternative. Some learning platforms offer interactive playgrounds mimicking real-world setups. On these platforms, students can SSH into disposable Linux servers, or even access multi-server setups, right from their browsers!

Experimenting with new skills in such sandboxes makes the learning hands-on. At the same time, these platforms free students from the need to provision playgrounds themselves. This brings students closer to real-world environments while keeping the learning process safe - playgrounds can always be destroyed and recreated without damaging any real production systems.

I got so fascinated by the idea of interactive playgrounds that I recently spent a week researching platforms that provide an in-browser, learn-by-doing experience. Below are my findings, in alphabetical order:

Read more

How HTTP Keep-Alive Can Cause a TCP Race Condition

These mysterious HTTP 502s have happened to me twice already over the past few years. And since the amount of service-to-service communication only grows every year, I expect more and more people to run into the same issue. So, I'm sharing it here.

TL;DR: HTTP Keep-Alive between a reverse proxy and an upstream server, combined with some unfortunate downstream- and upstream-side timeout settings, can make clients receive HTTP 502s from the proxy.
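For the impatient, the risky combination has roughly this shape (a sketch only - nginx is used here just as an example reverse proxy, and the ports and settings are made up; the full post walks through the actual failure sequence):

```bash
# A hypothetical nginx fragment that keeps idle connections to the upstream open:
cat > /etc/nginx/conf.d/app.conf <<'EOF'
upstream app {
    server 127.0.0.1:3000;
    keepalive 16;                        # pool of idle keep-alive connections to the upstream
}
server {
    listen 80;
    location / {
        proxy_http_version 1.1;
        proxy_set_header Connection "";  # don't force "Connection: close" toward the upstream
        proxy_pass http://app;
    }
}
EOF

# If the app on :3000 closes idle keep-alive connections *sooner* than the proxy
# stops reusing them, a request may be written to a connection the upstream has
# just closed - and the proxy surfaces that failure to the client as an HTTP 502.
```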

Read more