
Use Instrumentation to Provide More Detailed Metrics

Explore how to implement instrumented metrics within Kubernetes applications to gain detailed insights into performance. Understand how to use Prometheus client libraries to track response times with labels like service, code, method, and path. Learn to query and analyze these metrics to pinpoint issues more effectively and improve application monitoring.

Issue with current metrics

We shouldn’t just say that the go-demo-5 application is slow. That would not provide enough information for us to quickly inspect the code in search of the exact cause of the slowness. We should be able to do better and deduce which part of the application is misbehaving. Can we pinpoint a specific path that produces slow responses? Are all methods equally slow, or is the issue limited to only one? Do we know which function produces the slowness? There are many similar questions we should be able to answer in situations like this, but we can’t with the current metrics. They are too generic and too broad to answer application-specific questions; usually, they can only tell us that a specific Kubernetes resource is misbehaving.
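To make those questions answerable, the application itself has to record metrics with labels such as service, code, method, and path. As a minimal sketch of that idea, here is an instrumented handler using the Python Prometheus client library (the go-demo-5 application is written in Go, but the pattern is the same); the metric name, label values, and `handle_request` helper are illustrative assumptions, not the application's actual code.

```python
import time

from prometheus_client import Histogram, generate_latest

# A histogram of response times, partitioned by labels that let us
# drill down to a specific service, status code, method, and path.
RESP_TIME = Histogram(
    "http_server_resp_time",          # assumed metric name
    "Response time of HTTP requests, in seconds",
    ["service", "code", "method", "path"],
)

def handle_request(method, path):
    """Hypothetical request handler that records its own duration."""
    start = time.time()
    # ... real request processing would happen here ...
    RESP_TIME.labels(
        service="go-demo-5",
        code="200",
        method=method,
        path=path,
    ).observe(time.time() - start)

# Simulate one request, then dump what Prometheus would scrape.
handle_request("GET", "/demo/hello")
print(generate_latest().decode())
```

The scraped output contains one time series per unique label combination, which is exactly what lets a query isolate, say, slow `GET` requests to a single path instead of blaming the whole application.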

The metrics we explored so far are a combination of exporters and instrumentation. Exporters are in charge of taking existing metrics and converting them into a Prometheus-friendly format. An example would be Node Exporter, which takes “standard” Linux metrics and converts them into Prometheus's time-series ...