omelasticsearch: Elasticsearch Output Module

Module Name:    omelasticsearch

Author: Rainer Gerhards <rgerhards@adiscon.com>

Available since: 6.4.0+

Description:

This module provides native support for logging to Elasticsearch.

Action Parameters

Note: parameter names are case-insensitive.

  • server [http://|https://]<hostname | ip>[:<port>] An array of Elasticsearch servers in the specified format. If no scheme is specified, it will be chosen according to usehttps. If no port is specified, serverport will be used. Defaults to “localhost”.

    Requests to Elasticsearch will be load-balanced between all servers in round-robin fashion.

    Examples:
     server="localhost:9200"
     server=["elasticsearch1", "elasticsearch2"]
  • serverport Default HTTP port to use to connect to Elasticsearch if none is specified on a server. Defaults to 9200.
  • healthchecktimeout Specifies the number of milliseconds to wait for a successful health check on a server. Before trying to submit events to Elasticsearch, rsyslog will execute an HTTP HEAD to /_cat/health and expect an HTTP OK within this timeframe. Defaults to 3500.

    Note that the health check verifies connectivity only, not the state of the Elasticsearch cluster.
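
    For example, to allow a slower health check (the 5000 ms value is illustrative):

     action(type="omelasticsearch" server="localhost:9200" healthchecktimeout="5000")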

  • dynSearchIndex<on/off> Whether the string provided for searchIndex should be taken as an rsyslog template. Defaults to “off”, which means the index name will be taken literally. Otherwise, rsyslog will look for a template with that name, and the resulting string will be the index name. For example, let’s assume you define a template named “date-days” containing “%timereported:1:10:date-rfc3339%”. Then, with dynSearchIndex=”on” and searchIndex=”date-days”, each log will be sent to an index named after the first 10 characters of the timestamp, like “2013-03-22”.
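
    A minimal sketch of the daily-index example above (template and index names are illustrative):

     template(name="date-days" type="string" string="%timereported:1:10:date-rfc3339%")
     action(type="omelasticsearch" searchIndex="date-days" dynSearchIndex="on")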
  • pipelineName The ingest node pipeline name to be included in the request. This allows preprocessing of events before indexing them. By default, events are not sent to a pipeline.
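
    For example, assuming an ingest pipeline named “add-timestamp” has already been created in Elasticsearch (the name is illustrative):

     action(type="omelasticsearch" pipelineName="add-timestamp")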
  • asyncrepl<on/off> No longer supported, as Elasticsearch no longer supports it.
  • usehttps<on/off> Default scheme to use when sending events to Elasticsearch if none is specified on a server. Useful when Elasticsearch is behind Apache or another proxy that can add HTTPS. Note that if you have a self-signed certificate, you need to install it first. This is done by copying the certificate to a trusted path and then running update-ca-certificates. That trusted path is typically /usr/local/share/ca-certificates, but check the man page of update-ca-certificates for the default path of your distro.
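
    For example, to default to HTTPS for all listed servers (hostnames are illustrative):

     action(type="omelasticsearch" server=["elasticsearch1", "elasticsearch2"] usehttps="on")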
  • timeout How long Elasticsearch will wait for a primary shard to be available for indexing your log before sending back an error. Defaults to “1m”.
  • template This is the JSON document that will be indexed in Elasticsearch. The resulting string needs to be valid JSON, otherwise Elasticsearch will return an error. Defaults to:
$template JSONDefault, "{\"message\":\"%msg:::json%\",\"fromhost\":\"%HOSTNAME:::json%\",\"facility\":\"%syslogfacility-text%\",\"priority\":\"%syslogpriority-text%\",\"timereported\":\"%timereported:::date-rfc3339%\",\"timegenerated\":\"%timegenerated:::date-rfc3339%\"}"

This will produce documents like the following (pretty-printed here for readability):

{
    "message": " this is a test message",
    "fromhost": "test-host",
    "facility": "user",
    "priority": "info",
    "timereported": "2013-03-12T18:05:01.344864+02:00",
    "timegenerated": "2013-03-12T18:05:01.344864+02:00"
}
  • bulkmode<on/off> The default “off” setting means logs are shipped one by one, each in its own HTTP request, using the Index API. Set it to “on” and it will use Elasticsearch’s Bulk API to send multiple logs in the same request. The maximum number of logs sent in a single bulk request depends on your maxbytes and queue settings - usually limited by the dequeue batch size; see the bulk-mode sample at the end of this page. More information about queues can be found in the rsyslog queue documentation.
  • maxbytes (since v8.23.0) When shipping logs with bulkmode on, maxbytes specifies the maximum size of the request body sent to Elasticsearch. Logs are batched until either the buffer reaches maxbytes or the dequeue batch size is reached. To ensure Elasticsearch does not reject requests due to content length, verify this value is set according to the http.max_content_length setting in Elasticsearch. Defaults to 100m.
  • parent Specifying a string here will index your logs with that string as the parent ID of those logs. Please note that you need to define the parent field in your mapping for that to work. By default, logs are indexed without a parent.
  • dynParent<on/off> Using the same parent for all the logs sent in the same action is quite unlikely, so you’d probably want to turn this “on” and specify an rsyslog template that will provide meaningful parent IDs for your logs.
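
    A minimal sketch, assuming the sending host should be the parent ID (the template name is illustrative):

     template(name="parent-id" type="string" string="%hostname%")
     action(type="omelasticsearch" parent="parent-id" dynParent="on")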
  • uid If you have basic HTTP authentication deployed (e.g., through the elasticsearch-basic plugin), you can specify your username here.
  • pwd Password for basic authentication.
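
    For example (the credentials shown are placeholders):

     action(type="omelasticsearch" server="localhost:9200" uid="rsyslog" pwd="s3cr3t")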
  • errorfile <filename> (optional)

    If specified, records that failed in bulk mode are written to this file, including their error cause. Rsyslog itself does not process the file any further, but the idea behind this mechanism is that the user can create a script to periodically inspect the error file and react appropriately. As the complete request is included, it is possible to simply resubmit messages from that script.

    Please note: when rsyslog has problems connecting to Elasticsearch, a general error is assumed and the submit is retried. However, if we receive negative responses during batch processing, we assume an error in the data itself (like a mandatory field that is not filled in, a format error, or something along those lines). Such errors cannot be solved by simply resubmitting the record. As such, they are written to the error file so that the user (or a script) can examine them and act appropriately. Note that, for example, after a search index reconfiguration (e.g. dropping the mandatory attribute), a resubmit may be successful.
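
    For example (the file path is illustrative):

     action(type="omelasticsearch" bulkmode="on" errorfile="/var/log/omelasticsearch.err")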

Statistic Counter

This plugin maintains global statistics, which accumulate across all action instances. The statistics record is named “omelasticsearch”. The counters are:

  • submitted - number of messages submitted for processing (with both success and error results)
  • fail.httprequests - the number of times an HTTP request failed. Note that a single HTTP request may be used to submit multiple messages, so this number may be (much) lower than fail.http.
  • fail.http - number of message failures due to connection-level problems (things like remote server down, broken link, etc.)
  • fail.es - number of failures due to an Elasticsearch error reply. Note that this counter does NOT count the number of failed messages but the number of times a failure occurred (a potentially much smaller number). Counting messages would be quite performance-intensive and is thus not done.

The fail.httprequests and fail.http counters reflect only failures that omelasticsearch detected. Once it detects problems, it (usually, depending on circumstances) tells the rsyslog core that it wants to be suspended until the situation clears (this is a requirement for rsyslog output modules). Once it is suspended, it does NOT receive any further messages. Depending on the user configuration, messages may be lost during this period. Those lost messages will NOT be counted by impstats (as it does not see them).

Note that some previous (pre-7.4.5) versions of this plugin had different counters. These were experimental and confusing. The only ones really used were “submits”, which was the number of successfully processed messages, and “connfail”, which was equivalent to “fail.http”.
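
To actually collect and emit these counters, the impstats module must be loaded; a minimal sketch (the interval and file path are illustrative):

module(load="impstats" interval="60" severity="7" log.syslog="off" log.file="/var/log/rsyslog-stats.log")
module(load="omelasticsearch")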

Samples

The following sample does the following:

  • loads the omelasticsearch module
  • outputs all logs to Elasticsearch using the default settings
module(load="omelasticsearch")
*.*     action(type="omelasticsearch")

The following sample does the following:

  • loads the omelasticsearch module
  • defines a template that will make the JSON contain the following properties
    • RFC-3339 timestamp when the event was generated
    • the message part of the event
    • hostname of the system that generated the message
    • severity of the event, as a string
    • facility, as a string
    • the tag of the event
  • outputs to Elasticsearch with the following settings
    • host name of the server is myserver.local
    • port is 9200
    • JSON docs will look as defined in the template above
    • index will be “test-index”
    • type will be “test-type”
    • activate bulk mode. For that to work effectively, we use an in-memory queue that can hold up to 5000 events. The maximum bulk size will be 300
    • retry indefinitely if the HTTP request failed (e.g. if the target server is down)
module(load="omelasticsearch")
template(name="testTemplate"
         type="list"
         option.json="on") {
           constant(value="{")
             constant(value="\"timestamp\":\"")      property(name="timereported" dateFormat="rfc3339")
             constant(value="\",\"message\":\"")     property(name="msg")
             constant(value="\",\"host\":\"")        property(name="hostname")
             constant(value="\",\"severity\":\"")    property(name="syslogseverity-text")
             constant(value="\",\"facility\":\"")    property(name="syslogfacility-text")
             constant(value="\",\"syslogtag\":\"")   property(name="syslogtag")
           constant(value="\"}")
         }
action(type="omelasticsearch"
       server="myserver.local"
       serverport="9200"
       template="testTemplate"
       searchIndex="test-index"
       searchType="test-type"
       bulkmode="on"
       maxbytes="100m"
       queue.type="linkedlist"
       queue.size="5000"
       queue.dequeuebatchsize="300"
       action.resumeretrycount="-1")