Here are some of the projects I’ve been working on:

HyperFaaS#

HyperFaaS is a scalable serverless platform built from the ground up by my team at 3s @ TU Berlin as a research project. I have been the main contributor since the start of the project. It is built entirely in Go and consists of multiple interacting components: it runs serverless functions as Docker containers and auto-scales them based on demand. This project gave me a lot of experience working with Go and platform-like systems.
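
To give a flavor of what demand-based scaling means, here is a toy sketch in Go: the replica count is derived from queue depth and clamped to a maximum. The names and the ceiling heuristic are illustrative only, not HyperFaaS's actual scaler.

```go
package main

import "fmt"

// desiredReplicas sizes a function's replica count from queued invocations.
// Toy heuristic for illustration: ceil(queued / perReplica), clamped to [1, max].
func desiredReplicas(queued, perReplica, max int) int {
	want := (queued + perReplica - 1) / perReplica
	if want < 1 {
		want = 1
	}
	if want > max {
		want = max
	}
	return want
}

func main() {
	// Simulated queue depths observed by the scaler over time.
	for _, q := range []int{0, 12, 40, 90, 30, 5} {
		fmt.Printf("queued=%2d -> replicas=%d\n", q, desiredReplicas(q, 10, 8))
	}
}
```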

View on Github

DATEV Collab#

My team and I collaborate with engineers at DATEV on a project where we are given use cases in which a serverless architecture can effectively modernize legacy systems that have no business consuming resources while idle.

We use Kubernetes and Knative to run serverless workloads, and so far we have refactored two systems with these technologies. For this we ran our own self-hosted cluster @ TU Berlin, which was very fun to manage. In this project I learned many technologies in the k8s ecosystem, like Helm, ArgoCD (big fan of this!) and Prometheus / Grafana.
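
For context, a Knative Service is just a short manifest, and scale-to-zero is what lets these workloads stop consuming resources while idle. Here is a minimal sketch with placeholder names and a placeholder image, not one of the actual DATEV systems:

```yaml
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: legacy-batch-api            # placeholder name
spec:
  template:
    metadata:
      annotations:
        # Knative's default, spelled out here because scale-to-zero is the point.
        autoscaling.knative.dev/min-scale: "0"
    spec:
      containers:
        - image: registry.example.com/legacy-batch-api:latest # placeholder image
```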

OXN#

OXN (or Observability eXperiment Engine) is a chaos-engineering tool developed by folks at ISE @ TU Berlin to test microservice systems. It lets you inject treatments into a system and monitor the effects, generating labeled data. These treatments range from faults (like network failures, packet loss, and pod kills) to configuration changes on running pods. I participated in a broad-scope project in which we completely refactored OXN from a simple CLI tool into a client-server architecture (with a UI) that can be deployed inside a k8s cluster, and we also added fault-detection features.

OXN is written in Python. We used FastAPI for the server and Next.js for the frontend. I focused mostly on the backend logic and added features like customizable alert treatments for configuring fault detection (with Prometheus), fault-detection analysis that accounts for false positives, and batch experiments, which make it easier to experiment with parameter variation.
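
The alert-based fault detection builds on plain Prometheus alerting rules. As a hedged example of the kind of rule involved (the metric name and thresholds are made up, and this is not OXN's actual configuration format):

```yaml
groups:
  - name: fault-detection
    rules:
      - alert: HighErrorRate   # should fire once an injected fault takes effect
        expr: rate(http_requests_total{status=~"5.."}[1m]) > 0.05
        for: 30s
        labels:
          severity: critical
```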

View on Github

FaaS-bench and Bachelor's Thesis#

In my bachelor's thesis, I performed microbenchmarks on four serverless FaaS platforms: AWS Lambda, GCF, Cloudflare, and Fly.io. The main metrics I analyzed were latency and elasticity across different dimensions, like CPU and memory configurations, geographical distribution of clients, and function image sizes. There have been many papers in cloud computing research analyzing the performance of cloud platforms; however, findings rarely stay relevant for more than two years because the platforms evolve so rapidly. I found Cloudflare Workers to be an incredibly powerful and performant, yet limited, serverless offering. Workers are instantly deployed to all edge locations, with no concept of regions or availability zones. However, they run on V8 rather than a full Node.js runtime, so forget doing anything complicated that requires many dependencies or Node APIs.
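
At its core, a latency microbenchmark is simple; the real work is in the dimensions (configurations, client regions, image sizes). Here is a minimal Go sketch of the measurement loop, with a placeholder endpoint and none of the actual harness:

```go
package main

import (
	"fmt"
	"net/http"
	"sort"
	"time"
)

func main() {
	const url = "https://faas.example.com/echo" // placeholder, not a real benchmarked endpoint
	const n = 50

	latencies := make([]time.Duration, 0, n)
	for i := 0; i < n; i++ {
		start := time.Now()
		resp, err := http.Get(url)
		if err != nil {
			fmt.Println("request failed:", err)
			continue
		}
		resp.Body.Close()
		latencies = append(latencies, time.Since(start))
	}
	if len(latencies) == 0 {
		return
	}
	// Report tail latency, not just the average.
	sort.Slice(latencies, func(i, j int) bool { return latencies[i] < latencies[j] })
	fmt.Println("p50:", latencies[len(latencies)/2])
	fmt.Println("p99:", latencies[len(latencies)*99/100])
}
```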

In FaaS-bench, we are expanding upon my thesis to run even more experiments on more providers (including Fastly, Oracle, and Google Cloud Run Functions 2). In the paper, we also provide an in-depth explanation of the novel features of each platform. This project gives me the opportunity to use all of the popular clouds, and it is making me really despise Oracle's DX… The repo will soon be open-sourced.

Go Loadgen library#

I spent quite a bit of time using load generators like k6 and Locust for research projects. In many cases I ended up switching to my own implementation: sometimes because these tools use too much memory, other times because I needed more flexibility. It was frustrating to re-implement executor code every time I created a new load generator, so I created go_loadgen, a simple library that provides constant and variable executors and workload pattern generation. It can be used to test HTTP, gRPC, or any other service, because the client implementation is left to the user.
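
To illustrate the executor idea (this is a sketch of the pattern, not go_loadgen's actual API): the executor owns the pacing, and the user supplies the client call as a plain function, which keeps the library protocol-agnostic.

```go
package main

import (
	"context"
	"log"
	"net/http"
	"sync"
	"time"
)

// RequestFunc is the user-supplied client call; the executor stays protocol-agnostic.
type RequestFunc func(ctx context.Context) error

// RunConstant fires fn at a fixed rate (rps > 0) until ctx is cancelled,
// then waits for in-flight requests to finish.
func RunConstant(ctx context.Context, rps int, fn RequestFunc) {
	ticker := time.NewTicker(time.Second / time.Duration(rps))
	defer ticker.Stop()

	var wg sync.WaitGroup
	for {
		select {
		case <-ctx.Done():
			wg.Wait()
			return
		case <-ticker.C:
			wg.Add(1)
			go func() {
				defer wg.Done()
				if err := fn(ctx); err != nil {
					log.Println("request failed:", err)
				}
			}()
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	// The "client" is just a closure; swap in gRPC or anything else.
	RunConstant(ctx, 20, func(ctx context.Context) error {
		req, err := http.NewRequestWithContext(ctx, http.MethodGet, "https://example.com", nil)
		if err != nil {
			return err
		}
		resp, err := http.DefaultClient.Do(req)
		if err != nil {
			return err
		}
		return resp.Body.Close()
	})
}
```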

I am planning to expand it and provide more complex workload pattern generation.

View on Github

benchctl#

As part of my studies and work, I had to write many different benchmarks. And consistently, I lost a lot of time on plumbing work:

  • Managing a bunch of different SSH connections to different VMs
  • Copying files across different filesystems
  • Running commands everywhere
  • Plotting data
  • Remembering which results belong to which parameters used for the benchmark runs
  • Managing metadata
  • Quickly comparing different benchmark runs side-by-side

So I decided to write a framework that would take care of all of this for me. It ended up turning into a specialized “workflow engine” of sorts. I also looked into Apache Airflow, but it was too complex for this use case. I am quite proud of how it works and I think I will keep using it forever to manage my benchmarks.
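
For a sense of the plumbing it replaces: below is roughly what running a single remote benchmark step looks like by hand with golang.org/x/crypto/ssh. The host, user, key path, and command are placeholders, and this is not benchctl's internals.

```go
package main

import (
	"fmt"
	"log"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	key, err := os.ReadFile(os.Getenv("HOME") + "/.ssh/id_ed25519") // placeholder key path
	if err != nil {
		log.Fatal(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		log.Fatal(err)
	}

	config := &ssh.ClientConfig{
		User:            "bench", // placeholder user
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for throwaway VMs only
	}

	client, err := ssh.Dial("tcp", "vm1.example.com:22", config) // placeholder host
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	session, err := client.NewSession()
	if err != nil {
		log.Fatal(err)
	}
	defer session.Close()

	// One benchmark step; now multiply by N VMs, M parameter sets, plus result copying.
	out, err := session.CombinedOutput("./run-bench --iterations 100")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Print(string(out))
}
```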

Additionally, this was the first project I released to the outside world via a package manager. I run Arch Linux, so I published benchctl to the AUR under benchctl-bin. Publishing to the AUR was very straightforward, so that is where I plan to publish most of my OSS projects.

View on Github