How we reduced log costs by moving from Elasticsearch to Grafana Loki

16 December, 13:30, «01 Hall. Tigran»

Abstract

An Elasticsearch cluster holding billions of log lines can consume terabytes of disk space. Grafana Loki can be a good candidate for storing and querying logs in large environments. In this talk we will focus on maximizing Loki’s performance and on transferring existing logs to it efficiently.

For a long time, the Elastic Stack was the de facto standard for collecting and processing logs in Kubernetes clusters. However, it is known to be quite demanding on computing resources: CPU, RAM, and disk. As a result, new players have appeared on the market offering alternative solutions, one of which is Grafana Loki. If you decide you need to change your logging stack, there are several problems to solve and questions to answer.

In this talk I will share our experience at KTS of migrating logs from an Elasticsearch cluster to Loki: what difficulties we encountered along the way, how we solved them, and how much money we saved in the end.

We’ll also discuss topics such as:
* Architectural differences between the ELK/EFK stack and Grafana Loki
* How Loki allows you to save a lot on logging infrastructure
* How to avoid cloud provider vendor lock-in: here we will analyze the principles of boltdb-shipper and its interaction with S3-compatible storage
* What “knobs” you can tweak in the Loki configuration to get maximum performance out of it
* And most importantly: what to do if your logs currently live in an Elasticsearch cluster, and how to transfer them to Loki in a reasonable amount of time. I will share our own experience and solution (see the sketch after this list).
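To make the last point concrete, here is a minimal sketch of one way to bulk-transfer logs: scrolling through an Elasticsearch index oldest-first and pushing batches to Loki’s HTTP push API (`/loki/api/v1/push`). The endpoints, index pattern, labels, and field names below are illustrative assumptions, not the exact setup from the talk.

```python
# Minimal sketch: scroll an Elasticsearch index and push batches to Loki.
# Endpoints, index pattern, labels, and field names are assumptions.
import datetime
import requests
from elasticsearch import Elasticsearch

ES_URL = "http://elasticsearch:9200"               # assumed ES endpoint
LOKI_PUSH_URL = "http://loki:3100/loki/api/v1/push"  # Loki push API
INDEX = "logs-2023.01.*"                           # hypothetical index pattern

es = Elasticsearch(ES_URL)

def to_nanos(iso_ts: str) -> str:
    """Convert an ISO-8601 @timestamp to the Unix-nanosecond string Loki expects."""
    dt = datetime.datetime.fromisoformat(iso_ts.replace("Z", "+00:00"))
    return str(int(dt.timestamp() * 1e9))

def push_batch(hits):
    """Group a batch of ES hits into one Loki stream and push it over HTTP."""
    values = [
        [to_nanos(h["_source"]["@timestamp"]), h["_source"]["message"]]
        for h in hits
    ]
    # Older Loki versions reject out-of-order writes, so keep entries sorted.
    values.sort(key=lambda v: int(v[0]))
    payload = {"streams": [{"stream": {"job": "es-migration"}, "values": values}]}
    requests.post(LOKI_PUSH_URL, json=payload, timeout=30).raise_for_status()

# Scroll through the index oldest-first so each stream stays time-ordered.
resp = es.search(index=INDEX, scroll="2m", size=1000,
                 sort=["@timestamp"], query={"match_all": {}})
while hits := resp["hits"]["hits"]:
    push_batch(hits)
    resp = es.scroll(scroll_id=resp["_scroll_id"], scroll="2m")
```

In practice you would parallelize across indices and carry more labels than the single hypothetical `job` label shown here, but the scroll-then-push loop is the core of the approach.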

The talk was accepted to the conference program