omkafka

rsyslog 8.31.0 (v8-stable) released

Today, we release rsyslog 8.31. This is probably one of the biggest releases in the past couple of years. While it also offers great new functionality, what is really important about it is the focus on further improved software quality. For a more detailed description, please read Rainer’s blog post. Detailed information about the huge list of changes is available in the changelog.

http://blog.gerhards.net/2017/11/rsyslog-831-important-release.html

The packages have received some notable changes as well. First off, we were able to provide the Redis output module as a separate package on Ubuntu 14.04 and newer. Also, a dependency change for the ommongodb module means it is now only available on Ubuntu 16.04 and newer, and no longer on CentOS/RHEL. Platform restrictions like these are currently unavoidable due to dependency availability.

rsyslog 8.28.0 (v8-stable) released

We have released rsyslog 8.28.0.

This release features a lot of changes. Again, the most notable change is much more robust, yet still experimental, support for Kafka output and input. In addition, there is a new build requirement for librelp 1.2.14 due to API requirements in imrelp, along with many changes and fixes for omfwd, imfile, mmdblookup, imtcp and more.

Please note that Kafka support requires the librdkafka library as a dependency, which itself has some new dependencies.

For a complete list of changes, fixes and enhancements, please visit the ChangeLog.

The packages will follow when they are finished.

rsyslog 8.18.0 (v8-stable) released

We have released rsyslog 8.18.0.

This is mostly a bug-fixing release. Among the large number of fixes are a few additions to the testbench and some minor convenience enhancements to several modules (such as redis, omkafka and imfile).

To get a full overview of the changes, please take a look at the Changelog.
ChangeLog:

Changelog for 8.18.0 (v8-stable)

Version 8.18.0 [v8-stable] 2016-04-19

  • testbench: When running privdrop tests, the testbench tries to drop
    privileges to the “rsyslog”, “syslog” or “daemon” user when running as
    root and the RSYSLOG_TESTUSER environment variable is not explicitly
    set. Make sure the unprivileged test user can write into the tests/ dir!
  • templates: add option to convert timestamps to UTC
    closes https://github.com/rsyslog/rsyslog/issues/730
  • omjournal: fix segfault (regression in 8.17.0)
  • imptcp: added AF_UNIX support
    Thanks to Nathan Brown for implementing this feature.
  • new template options
    • compressSpace
    • date-utc
  • redis: support for authentication
    Thanks to Manohar Ht for the patch
  • omkafka: make kafka-producer restart on HUP optional
    Until now, omkafka killed and re-created the kafka-producer on HUP.
    This is not always desirable. This change introduces an action
    parameter (reopenOnHup="on|off") which lets the user control
    recycling of the kafka-producer. It defaults to "on" (for backward
    compatibility); "off" lets the user ignore HUP as far as the
    kafka-producer is concerned (see the configuration sketch after this
    changelog).
    Thanks to Janmejay Singh for implementing this feature
  • imfile: new “FreshStartTail” input parameter
    Thanks to Curu Wong for implementing this.
  • omjournal: fix libfastjson API issues
    This module accessed private data members of libfastjson
  • ommongodb: fix json API issues
    This module accessed private data members of libfastjson
  • testbench improvements (more tests and more thorough tests)
    among others:

    • tests for omjournal added
    • tests for KSI subsystem
    • tests for privilege drop statements
    • basic test for RELP with TLS
    • some previously disabled tests have been re-enabled
  • dynamic stats subsystem: a couple of smaller changes
    they also involve the format, which is slightly incompatible with the
    previous version. As this feature was released only very recently (in
    the last version), we considered this acceptable.
    Thanks to Janmejay Singh for developing this.
  • foreach loop: now also iterates over objects (not just arrays)
    Thanks to Janmejay Singh for developing this.
  • improvements to the CI environment
  • enhancement: queue subsystem is more robust in regard to some corruptions
    It is now detected if a .qi file states that the queue contains more
    records than are actually inside the queue files. Previously this
    resulted in an emergency switch to direct mode; now the problem is only
    reported and processing continues.
  • enhancement: Allow rsyslog to bind UDP ports even without the specific
    interface being up at the moment.
    Alternatively, rsyslog could be ordered after networking, however,
    that might have some negative side effects. Also IP_FREEBIND is
    recommended by systemd documentation.
    Thanks to Nirmoy Das and Marius Tomaschewski for the patch.
  • cleanup: removed no longer needed json-c compatibility layer
    as we now always use libfastjson, we do not need to support old
    versions of json-c (libfastjson was based on the newest json-c
    version at the time of the fork, which is the newest in regard
    to the compatibility layer)
  • new External plugin for sending metrics to SPM Monitoring SaaS
    Thanks to Radu Gheorghe for the patch.
  • bugfix imfile: fix memory corruption bug when appending @cee
    Thanks to Brian Knox for the patch.
  • bugfix: memory misallocation if position.from and position.to is used
    rsyslog tried to allocate a negative amount of memory if position.from
    was smaller than the buffer size (at least with JSON variables). This
    usually led to a segfault.
    closes https://github.com/rsyslog/rsyslog/issues/915
  • bugfix: fix potential memleak in TCP allowed sender definition
    depending on circumstances, a very small leak could happen on each
    HUP. This was caused by an invalid macro definition which did not rule
    out side effects.
  • bugfix: $PrivDropToGroupID actually did a name lookup
    … instead of using the provided ID
  • bugfix: small memory leak in imfile
    Thanks to Tomas Heinrich for the patch.
  • bugfix: double free in jsonmesg template
    There has to be actual json data in the message (from mmjsonparse,
    mmnormalize, imjournal, …) to trigger the crash.
    Thanks to Tomas Heinrich for the patch.
  • bugfix: incorrect formatting of stats when CEE/JSON format is used
    This led to ill-formed JSON being generated
  • bugfix omfwd: new-style keepalive action parameters did not work
    due to being spelled inconsistently inside the code. Note that the
    legacy $keepalive… parameters always worked
    see also: https://github.com/rsyslog/rsyslog/issues/916
    Thanks to Devin Christensen for alerting us and an analysis of the
    root cause.
  • bugfix: memory leaks in logctl utility
    Detected by clang static analyzer. Note that these leaks CAN happen in
    practice and may even be pretty large. This was probably never detected
    because the tool is not often used.
  • bugfix omrelp: fix segfault if no port action parameter was given
    closes https://github.com/rsyslog/rsyslog/issues/911
  • bugfix imtcp: Messages not terminated by a NL were discarded
    … upon connection termination.
    Thanks to Tomas Heinrich for the patch.
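
As a side note on the omkafka item above, here is a minimal configuration sketch showing the new parameter in context. Only reopenOnHup is the feature from this changelog; the broker, topic and template values are placeholders borrowed from the Kafka recipe further down this page.

module(load="omkafka")

action(
  type="omkafka"
  broker=["localhost:9092"]    # placeholder broker address
  topic="rsyslog_logstash"     # placeholder topic name
  template="json_lines"        # assumes a template of this name exists
  reopenOnHup="off"            # do not recycle the kafka-producer on HUP
)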

rsyslog 8.15.0 (v8-stable) released

We have released rsyslog 8.15.0.

This release sports a lot of changes. Among them are a lot of bugfixes as well as changes to the KSI support, pmciscoios, omkafka, the 0mq modules, omelasticsearch and many more.

To get a full overview of the changes, please take a look at the Changelog.
ChangeLog:

http://www.rsyslog.com/changelog-for-8-15-0-v8-stable/

Download:

http://www.rsyslog.com/downloads/download-v8-stable/

As always, feedback is appreciated.

Best regards,
Florian Riedl

Connecting with Logstash via Apache Kafka

Original post: Recipe: rsyslog + Kafka + Logstash by @Sematext

This recipe is similar to the previous rsyslog + Redis + Logstash one, except that we’ll use Kafka as a central buffer and connecting point instead of Redis. You’ll have more of the same advantages:

  • rsyslog is light and crazy-fast, including when you want it to tail files and parse unstructured data (see the Apache logs + rsyslog + Elasticsearch recipe)
  • Kafka is awesome at buffering things
  • Logstash can transform your logs and connect them to N destinations with unmatched ease

There are a couple of differences to the Redis recipe, though:

  • rsyslog already has Kafka output packages, so it’s easier to set up
  • Kafka has a different set of features than Redis (trying to avoid flame wars here) when it comes to queues and scaling

As with the other recipes, I’ll show you how to install and configure the needed components. The end result would be that local syslog (and tailed files, if you want to tail them) will end up in Elasticsearch, or a logging SaaS like Logsene (which exposes the Elasticsearch API for both indexing and searching). Of course you can choose to change your rsyslog configuration to parse logs as well (as we’ve shown before), and change Logstash to do other things (like adding GeoIP info).

Getting the ingredients

First of all, you’ll probably need to update rsyslog. Most distros come with ancient versions and don’t have the plugins you need. From the official packages you can install rsyslog itself together with the Kafka output module.
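
On Ubuntu or Debian that might look like the sketch below; the rsyslog-kafka package name is an assumption based on how the output modules are commonly packaged, so check your distribution’s package list:

# assumes Ubuntu/Debian with the official rsyslog repository enabled;
# the rsyslog-kafka package name may differ on other platforms
sudo apt-get update
sudo apt-get install rsyslog rsyslog-kafka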

If you don’t have Kafka already, you can set it up by downloading the binary tar. And then you can follow the quickstart guide. Basically you’ll have to start Zookeeper first (assuming you don’t have one already that you’d want to re-use):

bin/zookeeper-server-start.sh config/zookeeper.properties

And then start Kafka itself and create a simple 1-partition topic that we’ll use for pushing logs from rsyslog to Logstash. Let’s call it rsyslog_logstash:

bin/kafka-server-start.sh config/server.properties
bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic rsyslog_logstash

Finally, you’ll have Logstash. At the time of writing this, we have a beta of 2.0, which comes with lots of improvements (including huge performance gains of the GeoIP filter I touched on earlier). After downloading and unpacking, you can start it via:

bin/logstash -f logstash.conf

Logstash is also available as packages, in which case you’d put the configuration file in /etc/logstash/conf.d/ and start it with the init script.
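
With the package install, that could look roughly like this (a sketch assuming the packaged default paths and service name):

# paths and service name are assumptions; adjust for your distribution
sudo cp logstash.conf /etc/logstash/conf.d/
sudo service logstash start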

Configuring rsyslog

With rsyslog, you’d need to load the needed modules first:

module(load="imuxsock")  # will listen to your local syslog
module(load="imfile")    # if you want to tail files
module(load="omkafka")   # lets you send to Kafka

If you want to tail files, you’d have to add definitions for each group of files like this:

input(type="imfile"
  File="/opt/logs/example*.log"
  Tag="examplelogs"
)

Then you’d need a template that will build JSON documents out of your logs. You would publish these JSON documents to Kafka and consume them with Logstash. Here’s one that works well for plain syslog and tailed files that aren’t parsed via mmnormalize:

template(name="json_lines" type="list" option.json="on") {
  constant(value="{")
  constant(value="\"timestamp\":\"")
  property(name="timereported" dateFormat="rfc3339")
  constant(value="\",\"message\":\"")
  property(name="msg")
  constant(value="\",\"host\":\"")
  property(name="hostname")
  constant(value="\",\"severity\":\"")
  property(name="syslogseverity-text")
  constant(value="\",\"facility\":\"")
  property(name="syslogfacility-text")
  constant(value="\",\"syslog-tag\":\"")
  property(name="syslogtag")
  constant(value="\"}")
}

By default, rsyslog has a memory queue of 10K messages and has a single thread that works with batches of up to 16 messages (you can find all queue parameters here). You may want to change:
– the batch size, which also controls the maximum number of messages to be sent to Kafka at once
– the number of threads, which would parallelize sending to Kafka as well
– the size of the queue and its nature: in-memory (default), disk or disk-assisted

In an rsyslog->Kafka->Logstash setup I assume you want to keep rsyslog light, so these numbers would be small, like:

main_queue(
  queue.workerthreads="1"      # threads to work on the queue
  queue.dequeueBatchSize="100" # max number of messages to process at once
  queue.size="10000"           # max queue size
)

Finally, to publish to Kafka you’d mainly specify the brokers to connect to (in this example we have one listening on localhost:9092) and the name of the topic we just created:

action(
  broker=["localhost:9092"]
  type="omkafka"
  topic="rsyslog_logstash"
  template="json"
)

Assuming Kafka is started, rsyslog will keep pushing to it.
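
If you want to check that messages actually arrive in the topic before wiring up Logstash, you can use the console consumer shipped with Kafka (a sketch; on newer Kafka versions you’d pass --bootstrap-server localhost:9092 instead of --zookeeper):

bin/kafka-console-consumer.sh --zookeeper localhost:2181 --topic rsyslog_logstash --from-beginning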

Configuring Logstash

This is the part where we pick the JSON logs (as defined in the earlier template) and forward them to the preferred destinations. First, we have the input, which will read from the Kafka topic we created. To connect, we’ll point Logstash to Zookeeper, and it will fetch all the info about Kafka from there:

input {
  kafka {
    zk_connect => "localhost:2181"
    topic_id => "rsyslog_logstash"
  }
}
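
Depending on the codec the Kafka input uses, the JSON built by the rsyslog template may arrive as a raw string in the message field. If that is the case in your setup, a json filter can parse it into proper fields; this is a sketch rather than part of the original recipe:

filter {
  json {
    source => "message"   # parse the JSON string produced by the rsyslog template
  }
}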

At this point, you may want to use various filters to change your logs before pushing to Logsene/Elasticsearch. For this last step, you’d use the Elasticsearch output:

output {
  elasticsearch {
    hosts => "localhost" # it used to be "host" pre-2.0
    port => 9200
    #ssl => "true"
    #protocol => "http" # removed in 2.0
  }
}

And that’s it! Now you can use Kibana (or, in the case of Logsene, either Kibana or Logsene’s own UI) to search your logs!

rsyslog 8.7.0 (v8-stable) released

We have released rsyslog 8.7.0.

Version 8.7.0 contains various improvements and additions to a wide array of modules, like imfile and imptcp, improvements to RainerScript and mmnormalize (thanks to Singh Janmejay), and a couple of other enhancements. But the biggest addition is the new omkafka module, which now allows writing directly to Apache Kafka.

This release also contains important bug fixes.

This is a recommended upgrade for all users.

ChangeLog:

http://www.rsyslog.com/changelog-for-8-7-0-v8-stable/

Download:

http://www.rsyslog.com/downloads/download-v8-stable/

As always, feedback is appreciated.

Best regards,
Florian Riedl
