The metrics push subsystem actively sends performance metrics to time series storage servers: InfluxDB, Prometheus (via Push Gateway), Graphite, and OpenTSDB. Data is collected using JMX scans defined with the perfmon library. Push outputs are configured in the metrics.bsh script.
Metrics are represented by a name, a value, a timestamp, and a set of named attributes. The administrator can filter which metrics are sent and which attributes are included. It is also possible to add extra constant attributes, for example to identify the agent, application, or environment/location.
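Conceptually, a single sample can be pictured as a small record. The sketch below is purely illustrative Python and does not reflect the agent's internal classes; all field and attribute names here are made up:

```python
import time

# Illustrative sketch only: one metric sample as seen by a push output.
sample = {
    "name": "httpstats.calls",        # metric name (hypothetical)
    "value": 1234,                    # measured value
    "timestamp": int(time.time()),    # collection time, epoch seconds
    "attrs": {                        # named attributes from the JMX scan
        "type": "ZorkaStats",
        "tag": "ALL",
        # constant attributes (agent.attrs) merged in:
        "app": "MYAPP",
        "env": "PROD",
    },
}
```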
Filtering metrics
There are two settings that can be used for filtering metrics: metrics.include and metrics.exclude. If no included metrics are defined, the agent will send all metrics that do not match any exclusion filter. Inclusions and exclusions are defined as named value maps, for example:
metrics.include.zorka = zorka:type=ZorkaStats,*
metrics.include.java = java.lang:*
metrics.exclude.diag = zorka:type=ZorkaStats,name=Diagnostic,*
metrics.exclude.tomcat = Catalina:*
The basic filtering form is similar to JMX object name masks, extended with attribute names; a value prefixed with the ~ character is treated as a regular expression:
domain.name:attr1=val1,attr2=mask2*,attr3=*,attr4=~[a-z].*,metric,*
There is a special attribute without a value that represents the metric name. Only one such attribute can be defined. For example:
zorka:name=HttpStats,type=ZorkaStats,attr=stats,tag=ALL,calls
The example above is a closed filter: any sample having more attributes than stated in the filter expression will not match. The earlier example, ending with *, is an open filter that accepts additional attributes. As the metric name can be either a mask or a regular expression, there is one ambiguity concerning the * symbol: it cannot be used as a metric name mask; instead, omitting the metric name in a filter expression matches metrics with any name.
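As a rough sketch of how a single attribute pattern could be matched (illustrative Python only, not the agent's actual implementation; parsing of whole filter expressions is omitted):

```python
import fnmatch
import re

def attr_matches(pattern: str, value: str) -> bool:
    """Match an attribute value against a filter pattern:
    a ~ prefix denotes a regular expression, otherwise the
    pattern is treated as a glob-style mask (val1, mask2*, *)."""
    if pattern.startswith("~"):
        return re.fullmatch(pattern[1:], value) is not None
    return fnmatch.fnmatchcase(value, pattern)
```

For example, `attr_matches("mask2*", "mask2x")` and `attr_matches("~[a-z].*", "abc")` both match, while `attr_matches("~[a-z].*", "ABC")` does not.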
Constant attributes
Additional attributes can be defined using the agent.attrs setting - the same as for tracers. This is an attribute map, for example:
agent.attrs.host = ${&zorka.hostname}
agent.attrs.app = MYAPP
agent.attrs.env = PROD
agent.attrs.location = DC1
Attribute filtering
Attribute filters can be used to limit the number of attributes sent to external systems. They are configured in a similar way to metric filters:
metrics.attr.exclude = ...
- comma-separated list of attributes to be excluded from submitted metrics; due to performance considerations these must be exact names, not masks;
metrics.attr.include = ...
- comma-separated list of attributes to be included in submitted metrics; if present, all other attributes will be excluded;
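For example, an attribute filter configuration might look like this (all attribute names below are hypothetical):

```
# either whitelist a fixed set of attributes ...
metrics.attr.include = host,app,env,type
# ... or blacklist a few noisy ones (exact names, not masks)
metrics.attr.exclude = requestId,sessionId
```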
Configuring outputs
There are integration modules for several time series storage systems, and the agent can work with several of them at once. By default there is one configuration for each supported system type, but it is possible to define additional outputs of the same type (e.g. InfluxDB); see the metrics.bsh script for more details.
InfluxDB
InfluxDB metrics are sent via HTTP using InfluxDB text format. Available options:
influxdb = no
- enables InfluxDB output;
influxdb.url = http://localhost:8086
- URL to the data server;
influxdb.db = zorka
- database name;
influxdb.user = <username>
- user name when authentication is enabled;
influxdb.pass = <password>
- password when authentication is enabled;
influxdb.prefix = zorka
- prefix added to all metric names;
influxdb.chunk.size = 4096
- limits the amount of data sent in a single HTTP request;
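Putting the options together, a minimal configuration fragment enabling the InfluxDB output might look like this (the host name and credentials are placeholders):

```
influxdb = yes
influxdb.url = http://influxdb.example.com:8086
influxdb.db = zorka
influxdb.user = zorka
influxdb.pass = changeme
influxdb.prefix = zorka
```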
OpenTSDB
OpenTSDB metrics are sent via HTTP using JSON API. Available options:
opentsdb = no
- enables OpenTSDB output;
opentsdb.url = http://localhost:4242
- URL to the data server;
opentsdb.report = none
- error reporting (none for no report, summary for a summary report, detail for a detailed report);
opentsdb.prefix = zorka
- prefix added to all metric names;
opentsdb.chunk.size = 4096
- limits the amount of data sent in a single request;
Note that OpenTSDB has its own chunk size limit that cannot be exceeded, so the limit set in the agent must be lower. See the tsd.http.request.enable_chunked and tsd.http.request.max_chunk properties in the OpenTSDB configuration.
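On the OpenTSDB side this means chunked requests have to be enabled and the maximum chunk must stay above the agent's chunk size; a sketch of the relevant OpenTSDB configuration fragment (the values shown are examples, not defaults):

```
tsd.http.request.enable_chunked = true
tsd.http.request.max_chunk = 8192
```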
Graphite Integration
Graphite metrics are sent via TCP using the Graphite plaintext format. Available options:
graphite = no
- enables Graphite output;
graphite.addr = 127.0.0.1:2003
- address and port on which Graphite is listening;
graphite.prefix = zorka
- prefix added to all metric names;
graphite.chunk.size = 4096
- limits the amount of data sent in a single request;
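For reference, the Graphite plaintext protocol carries one metric per line as path, value, and Unix timestamp; a metric sent with the default prefix could look roughly like this (the metric path is hypothetical):

```
zorka.httpstats.calls 1234 1700000000
```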
Prometheus Push Gateway
Prometheus can be fed in push mode via Push Gateway. Note that push mode is generally not recommended, yet it is sometimes useful. Data is sent via HTTP using the Prometheus text format. Available options:
prometheus.push = no
- enables Prometheus Push Gateway output;
prometheus.push.url = http://localhost:9191
- URL to the Push Gateway;
prometheus.push.prefix = zorka
- prefix added to all metric names;
prometheus.push.chunk.size = 4096
- limits the amount of data sent in a single request;
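A minimal configuration fragment enabling the push output might look like this (the gateway host below is a placeholder; adjust the URL to your Push Gateway deployment):

```
prometheus.push = yes
prometheus.push.url = http://pushgw.example.com:9191
prometheus.push.prefix = zorka
```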
Prometheus Scraping Mode
Prometheus metrics are exposed via an HTTP server built into the agent. In order to enable the metrics endpoint, both the internal HTTP server and Prometheus support have to be enabled:
zorka.http = yes
prometheus = yes
Metrics will be available at port 8641 of the application server, e.g. http://appsrv1.myapp.com:8641/metrics
Appendix A: Setting up InfluxDB
The easiest way to run an InfluxDB instance is to start a Docker container with it:
$ mkdir influxdb ; cd influxdb
$ docker run -p 8086:8086 -v $PWD:/var/lib/influxdb influxdb
This will expose InfluxDB on port 8086 of the Docker host and store data files in the local working directory.
Now the database needs to be created using a CREATE DATABASE query:
$ curl -X POST -G http://localhost:8086/query --data-urlencode "q=CREATE DATABASE zorka"
The command should return something like this:
{"results":[{"statement_id":0}]}
If the database was created incorrectly for some reason, it can be removed using a DROP DATABASE query:
curl -X POST -G http://localhost:8086/query --data-urlencode "q=DROP DATABASE zorka"
For more information about InfluxDB, including installation instructions for Linux distributions, see the official documentation.