Get insights on your Play Framework application with Dropwizard Metrics
At some point during application development, most of us reach the moment when we need more insight into what is happening inside our application, or need to monitor it. For the Play Framework there is already a drop-in solution provided by a great open-source tool, Kamon, and its module kamon-play.
But today we are going to talk about an alternative solution, Dropwizard Metrics, formerly known as Codahale Metrics, and its integration and usage with the Play Framework.
Well, at this point I started to look around, wondering if there are any libraries that integrate the two. I found several, but none of them complete. For example:
- metrics-scala - A great library with a neat API and Scala support, but in terms of Play Framework integration, just not enough.
- metrics-play - One of the first results Google offers to satisfy your search, but it is already abandoned and isn't compatible with the latest versions of the Play Framework and Metrics. There is a fork, though, that is updated to the latest library versions, so I decided to give it a try.
Unfortunately, the metrics-play module supports only a small part of what Dropwizard Metrics provides. It should be enough if you just need basic metrics exposed through a REST API, but I had higher requirements and ended up extending the functionality of the module and building the following modules:
- metrics-reporter-play - Metrics reporter integration with Play Framework.
- metrics-annotation-play - Metrics Annotations Support for Play Framework through Guice AOP.
That’s what we are going to talk about next.
Metrics reporter integration with Play Framework
Metrics provides a powerful toolkit of ways to measure the behavior of critical components in your production environment.
It also exposes the measured data through reporters, which are a great way of pushing stats from your application to your preferred storage backend.
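Outside of any Play integration, a reporter can also be wired up programmatically with the plain Dropwizard API. A minimal sketch:

```scala
import java.util.concurrent.TimeUnit
import com.codahale.metrics.{ConsoleReporter, MetricRegistry}

object ReporterSketch {
  val registry = new MetricRegistry()

  // Build a console reporter that prints rates per second and
  // durations in milliseconds.
  val reporter: ConsoleReporter = ConsoleReporter
    .forRegistry(registry)
    .convertRatesTo(TimeUnit.SECONDS)
    .convertDurationsTo(TimeUnit.MILLISECONDS)
    .build()

  // Report all metrics in the registry every 10 seconds.
  def start(): Unit = reporter.start(10, TimeUnit.SECONDS)
}
```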
At the time of writing the article, the supported reporters are:
- console - Reports metrics periodically to the console.
- graphite - Reports metrics periodically to Graphite.
But the Metrics library and community also provide various other reporters, such as the ElasticSearch Reporter.
Adding factories for additional reporters is a no-brainer.
Metrics Annotations Support for Play Framework through Guice AOP
By default, to create and track metrics we have to invoke the metric registry, create the metrics ourselves, and so on. Let's take a look:
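A minimal sketch of what this manual bookkeeping looks like with the Dropwizard API (`UserService` and `findUser` are hypothetical names used only for illustration):

```scala
import com.codahale.metrics.MetricRegistry

// Manual metric management: fetch a Timer from the registry and
// time the method body ourselves.
class UserService(registry: MetricRegistry) {
  private val findTimer = registry.timer("UserService.findUser")

  def findUser(id: Long): Option[String] = {
    val ctx = findTimer.time() // start timing
    try {
      // ... the actual lookup would go here ...
      None
    } finally {
      ctx.stop() // record the elapsed time
    }
  }
}
```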
To keep it DRY, there are annotations you can use instead; the module will create and appropriately invoke a Timer for @Timed, a Meter for @Metered, a Counter for @Counted, and a Gauge for @Gauge. @ExceptionMetered is also supported; it creates a Meter that measures how often a method throws exceptions.
The above example can be rewritten like:
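Sticking with the same hypothetical service, the annotated version could look like this; the module's Guice AOP interceptor creates and invokes the Timer for us:

```scala
import com.codahale.metrics.annotation.Timed

// No registry plumbing in the class itself: the @Timed annotation
// tells the interceptor to time every call to findUser.
class UserService {
  @Timed
  def findUser(id: Long): Option[String] = {
    // ... the actual lookup would go here ...
    None
  }
}
```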
or you can even annotate the class to create metrics for all of its declared methods:
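A sketch of the class-level variant (again with hypothetical names):

```scala
import com.codahale.metrics.annotation.Timed

// Annotating the class creates a Timer for each declared method.
@Timed
class UserService {
  def findUser(id: Long): Option[String] = None
  def deleteUser(id: Long): Boolean      = false
}
```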
Of course, this is supported only for classes that are instantiated through Guice, and there are certain limitations.
Let's finally try to use these libraries and see how it all works in a real application. The example project is hiding in the GitHub repo.
I'm using the activator play-scala template with the sbt plugin. We should add the JCenter resolver and the dependencies; in the end it will look something like this:
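A sketch of the relevant build.sbt additions; the group ID and version below are placeholders, so check each module's README for the real coordinates:

```scala
// build.sbt (fragment)

// Both modules are published to JCenter.
resolvers += Resolver.jcenterRepo

libraryDependencies ++= Seq(
  // Placeholder coordinates, see the modules' READMEs:
  "com.example" %% "metrics-reporter-play"   % "x.y.z",
  "com.example" %% "metrics-annotation-play" % "x.y.z"
)
```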
For the example I'll be using the Console reporter, so let's add the configuration to our application.conf:
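A sketch of what that configuration might look like; the exact keys are module-specific, so treat this as an assumption and consult the metrics-reporter-play README:

```
# application.conf (sketch; key names are assumptions)
metrics {
  logback = false          # disable logback metrics
  reporters = [
    {
      type      = "console"
      frequency = "10 seconds"
    }
  ]
}
```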
As you can see, I disabled the logback metrics and added a reporter that will periodically report metrics to the console.
And we can already start using annotations. I'll annotate the index action of the controller:
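A minimal sketch of the annotated action, assuming the activator-era play-scala template's controller and view names:

```scala
import javax.inject.Singleton
import com.codahale.metrics.annotation.{Counted, ExceptionMetered, Metered, Timed}
import play.api.mvc._

@Singleton
class Application extends Controller {

  // All annotations at once, purely for demonstration.
  @Timed
  @Metered
  @Counted
  @ExceptionMetered
  def index = Action {
    Ok(views.html.index("Your new application is ready."))
  }
}
```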
In a real application you don't have to use all the annotations at once, as @Timed alone already creates a Timer, which combines a Meter with a Histogram of durations.
After we start the app and access the main page, it will output the metrics to the console.
And of course you can still access the metrics through the REST API by adding a route to your routes file:
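A sketch of the route, assuming the metrics-play fork's controller; check the module's docs for the exact class path:

```
# conf/routes (fragment)
GET  /metrics    com.kenshoo.play.metrics.MetricsController.metrics
```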
What is next?
Metrics also provides a way to perform application health checks (small self-tests), which can be a great addition. For more information, check the docs.
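A health check is just a subclass of Dropwizard's HealthCheck; a minimal sketch, where `Database` is a hypothetical dependency used only for illustration:

```scala
import com.codahale.metrics.health.HealthCheck
import com.codahale.metrics.health.HealthCheck.Result

// Hypothetical dependency, stands in for whatever you want to self-test.
trait Database { def isConnected: Boolean }

// Returns healthy/unhealthy depending on the database connection.
class DatabaseHealthCheck(database: Database) extends HealthCheck {
  override def check(): Result =
    if (database.isConnected) Result.healthy()
    else Result.unhealthy("Cannot connect to the database")
}
```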
To build a proper environment there should be more ways to report the metrics, so adding more reporters can be a good way to go as well.
Better Future support
Currently, if you want to measure the execution time of a Future, it has to be done manually. This could be a good improvement.
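For the time being, the manual approach can be sketched as a small helper (not part of the modules above) that stops the timer context when the Future completes:

```scala
import com.codahale.metrics.MetricRegistry
import scala.concurrent.{ExecutionContext, Future}

object FutureMetrics {
  // Times the whole lifetime of the Future, not just the call
  // that creates it; the context is stopped on completion,
  // whether the Future succeeds or fails.
  def timedFuture[T](registry: MetricRegistry, name: String)
                    (block: => Future[T])
                    (implicit ec: ExecutionContext): Future[T] = {
    val ctx = registry.timer(name).time()
    val f   = block
    f.onComplete(_ => ctx.stop())
    f
  }
}
```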
HdrHistogram provides alternative high-quality reservoir implementations, which can be used in histograms and timers.