“THANOS” — Monitoring with Prometheus and Grafana
When I started working as a DevOps engineer, I was introduced to the concept of monitoring, whether infrastructure-level monitoring, application-level monitoring, or something in between. Consider a scenario where we have an infrastructure of more than five hundred servers. In that case we are bound to monitor the whole infrastructure proactively, efficiently, and smartly in order to meet service-level agreements and keep the systems up and running. If we are a service provider of any sort, we must keep an eagle's eye on our whole stack.
When I was newer to the field, I saw a huge infrastructure being monitored with Sensu, which gave me the basic concept of how monitoring of a whole infrastructure is done. I also saw the transition to modern monitoring techniques, and at that point I was introduced to tools such as Sensu, Prometheus, and Grafana.
Baseline Monitoring Setup with Prometheus and Grafana
In order to set up baseline monitoring in a containerized environment, you need a basic understanding of Docker and Prometheus. I have previously written a blog post where I set up that stack in an easy and simple way.
Thanos is a set of components that can be composed into a highly available metric system with unlimited storage capacity, which can be added seamlessly on top of existing Prometheus deployments.
Why Integrate Prometheus with Thanos?
While practicing monitoring with Prometheus, there were certain capabilities and areas that needed an upgrade in order to meet requirements such as:
- Storing historical data in a reliable and cost-efficient way
- Accessing all metrics, whether they are new or old using a single-query API
- Merging replicated data collected via Prometheus high-availability (HA) setups
This is where Thanos comes into play: it allows a user to run multiple instances of Prometheus, deduplicate their data, and archive data in long-term storage such as AWS S3 or other object-storage providers. Thanos introduces several key components that cover these shortcomings and take monitoring to the next level; we will look at them individually to understand the whole stack better. The first is the Sidecar, which resides alongside the Prometheus server and uploads its data to object storage. The second is the Querier, a querying component that responds to PromQL queries: it pulls data from the different Prometheus servers, merges it, and deduplicates it. The third is the Compactor, which compacts the data uploaded from the Prometheus servers (via their Sidecars) and downsamples the huge metric datasets into smaller chunks. The last component is the Ruler, which evaluates a global set of rules across the different Prometheus servers; its results are stored in a shared object store location for high availability.
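All four components are subcommands of a single `thanos` binary. Here is a minimal command-line sketch of each one; the paths, container names, and gRPC ports are my own illustrative choices, not fixed defaults:

```
# Sidecar: sits next to Prometheus, serves its data over gRPC,
# and ships TSDB blocks to object storage
thanos sidecar --tsdb.path=/prometheus --prometheus.url=http://localhost:9090 \
  --objstore.config-file=bucket.yml --grpc-address=0.0.0.0:10901

# Querier: fans PromQL queries out to every --store endpoint,
# then merges and deduplicates the results
thanos query --http-address=0.0.0.0:10904 --store=sidecar0:10901 --store=store:10905

# Compactor: compacts and downsamples the blocks sitting in object storage
thanos compact --data-dir=/tmp/compact --objstore.config-file=bucket.yml --wait

# Ruler: evaluates a global rule set against the Querier,
# writing results back to the shared object store
thanos rule --query=query:10904 --rule-file=rules.yml --objstore.config-file=bucket.yml
```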
Thanos Components Architecture Overview
Here is the official and complete architecture diagram for implementing Thanos with Prometheus.
In our scenario, let's create a simple Dockerized version of the whole stack in order to practice implementing Thanos and its components with Prometheus. First, I have created a simplified block diagram to help understand the whole concept before applying and testing it in a local containerized environment.
Let's write a docker-compose file to deploy the complete stack. We will do the following steps:
a) Setting up a Prometheus Server.
b) Enabling Thanos Sidecar for that specific Prometheus instance.
c) Deploying Thanos Querier with the ability to talk to Sidecar.
d) Deploying Thanos Store to retrieve metrics data stored in long-term storage (in this case, the local file system).
e) Setting up Thanos Compactor for data compaction and downsampling.
f) Configuring a node-exporter container.
g) Finally, Grafana for visualization, along with the docker-compose volumes and networks.
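The steps above can be sketched as a single docker-compose file. This is a minimal version under my own assumptions: the image tags, the gRPC ports (10901, 10903, 10905), and the shared `./data` directory are illustrative choices, while the HTTP ports (10902, 10904, 10906) match the ones used later in the post:

```yaml
version: "3.7"

services:
  prometheus:
    image: prom/prometheus:v2.38.0
    volumes:
      - ./prometheus.yml:/etc/prometheus/prometheus.yml
      - prom_data:/prometheus
    command:
      - --config.file=/etc/prometheus/prometheus.yml
      - --storage.tsdb.path=/prometheus
      # equal min/max block duration disables local compaction,
      # which the sidecar expects before uploading blocks
      - --storage.tsdb.min-block-duration=2h
      - --storage.tsdb.max-block-duration=2h
    ports:
      - "9090:9090"

  sidecar0:
    image: quay.io/thanos/thanos:v0.28.0
    command:
      - sidecar
      - --tsdb.path=/prometheus
      - --prometheus.url=http://prometheus:9090
      - --objstore.config-file=/bucket.yml
      - --grpc-address=0.0.0.0:10901
      - --http-address=0.0.0.0:10902
    volumes:
      - prom_data:/prometheus
      - ./bucket.yml:/bucket.yml
      - ./data:/data

  store:
    image: quay.io/thanos/thanos:v0.28.0
    command:
      - store
      - --data-dir=/tmp/store
      - --objstore.config-file=/bucket.yml
      - --grpc-address=0.0.0.0:10905
      - --http-address=0.0.0.0:10906
    volumes:
      - ./bucket.yml:/bucket.yml
      - ./data:/data

  query:
    image: quay.io/thanos/thanos:v0.28.0
    command:
      - query
      - --grpc-address=0.0.0.0:10903
      - --http-address=0.0.0.0:10904
      # one --store flag per gRPC endpoint the Querier should fan out to
      - --store=sidecar0:10901
      - --store=store:10905
    ports:
      - "10904:10904"

  compactor:
    image: quay.io/thanos/thanos:v0.28.0
    command:
      - compact
      - --data-dir=/tmp/compact
      - --objstore.config-file=/bucket.yml
      - --wait
    volumes:
      - ./bucket.yml:/bucket.yml
      - ./data:/data

  node-exporter:
    image: prom/node-exporter
    ports:
      - "9100:9100"

  grafana:
    image: grafana/grafana
    ports:
      - "3000:3000"

volumes:
  prom_data: {}
```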
Once the docker-compose file is complete, let's have a look at the local Prometheus configuration and the local file system storage through which Thanos Store and Thanos Sidecar will be able to reach each other and talk.
Here is the local file system storage configuration inside the bucket.yml file:
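For reference, Thanos's filesystem object-store configuration looks roughly like this, assuming `/data` is the directory mounted into the Sidecar, Store, and Compactor containers:

```yaml
# bucket.yml — tells every Thanos component where the "object store" lives;
# FILESYSTEM is fine for local testing, S3/GCS/etc. would replace it in production
type: FILESYSTEM
config:
  directory: "/data"
```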
Here is the Prometheus configuration file. It is up to us how many jobs we want to add; here, just to get more visualizations, I have added node_exporter and the other Thanos components as well.
scrape_configs:
  - job_name: prometheus
    static_configs:
      - targets:
          - "localhost:9090"
  - job_name: 'nodeexporter'
    static_configs:
      - targets:
          - "node-exporter:9100"
  - job_name: thanos-sidecar
    static_configs:
      - targets:
          - "sidecar0:10902"
  - job_name: thanos-store
    static_configs:
      - targets:
          - "store:10906"
  - job_name: thanos-query
Let’s Execute it and See the Results
All the configuration is now complete. First we will create the folder and file structure as shown above, then we will run the following command and it will create our whole cluster:
$ docker-compose up -d
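Before moving to the UIs, it is worth a quick sanity check from the shell. Thanos components expose standard `/-/healthy` endpoints; the ports below assume the setup described in this post:

```
$ docker-compose ps                          # all services should show "Up"
$ curl -s http://localhost:10904/-/healthy   # Thanos Query health endpoint
$ curl -s http://localhost:10902/-/healthy   # Thanos Sidecar health endpoint
```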
Once all the containers are up and running, let's have a look at the results. First we will access the Thanos Query UI, which looks identical to the Prometheus UI:
http://localhost:10904 — here in the picture I have written a simple query, up, to see the monitored nodes.
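Besides `up`, any series scraped from node_exporter can be queried here. A couple of illustrative PromQL examples (the metric names are standard node_exporter ones):

```
up                                              # 1 for every target Prometheus can scrape
rate(node_cpu_seconds_total{mode!="idle"}[5m])  # per-core CPU usage from node_exporter
```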
Here everything looks correct for Prometheus and the other containers. Now we can have a look at the node_exporter output at
http://localhost:9100. Finally, the Grafana results are the most important to see. For this we can jump to
http://localhost:3000 and log in to Grafana. Once logged in, we can add the data source; in our case it will be the Thanos Query component, which, according to the architecture diagram, provides the metrics to Grafana.
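As an alternative to clicking through the UI, Grafana can also pick up the data source from a provisioning file mounted under /etc/grafana/provisioning/datasources/. A minimal sketch, where the service name `query` and port 10904 are assumptions from our compose setup:

```yaml
# datasource.yml — provisions Thanos Query as a Grafana data source
apiVersion: 1

datasources:
  - name: Thanos
    type: prometheus           # Thanos Query speaks the Prometheus HTTP API
    access: proxy
    url: http://query:10904    # Thanos Query service, reachable on the compose network
    isDefault: true
```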
Select the Prometheus data source type, then add the Thanos Query component as the source behind it.
Finally we are at the Grafana dashboard, where we can easily set up any query we want and see the results, as shown in the screenshot below.
In A Nutshell
We have seen and tested how Thanos is used along with Prometheus and how it can be deployed with a basic Docker setup. We have seen how it interacts with Prometheus and the kinds of advantages it can give us. It is now clear how the Thanos Sidecar and Store are inherently advantageous for scaling compared with a typical Prometheus setup. As for Thanos Query, we can see how a single metric-collection point is a good thing for bigger environments. Lastly, downsampling through the Thanos Compactor seems like a performance no-brainer: large datasets can be handled easily using downsampling.
Hopefully in an upcoming blog post we can deploy it in a Kubernetes environment and see how efficiently things work there.
Having read through all the components of Thanos, I now fully know how to use all the powers Thanos holds in his hands.