
RSyslog Windows Agent license document – EULA

RSyslog Windows Agent
Version 8.0 Final Release
End User License Agreement
2025-01-21

This binary code license (“License”) contains rights and restrictions associated with use of the accompanying software and documentation (“Software”). Read the License carefully before installing the Software. By installing the Software you agree to the terms and conditions of this License.

1. Limited License Grant.
Adiscon grants you a non-exclusive License to use the software free of charge for 30 days for the purpose of evaluating whether to purchase a commercial license of RSyslog Windows Agent. After this period, you are required to purchase a proper license if you continue to use the software. If Customer has purchased RSyslog Windows Agent licenses, Customer is permitted to use the purchased product edition under the terms of this license agreement.

2. Copyright
The software is confidential copyrighted information of Adiscon GmbH, Germany. You shall not modify, decompile, disassemble, decrypt, extract, or otherwise reverse engineer the software. The software may not be leased, assigned, or sublicensed, in whole or in part. A separate license is required for each computer being monitored by the RSyslog Windows Agent.

3. Trademarks and logos
RSyslog Windows Agent is a trademark of Adiscon. Windows is a registered trademark of Microsoft Corporation. All other trademarks and service marks are the property of their respective owners.

4. Licensed syslog
An RSyslog Windows Agent license covers the installation of one system. RSyslog Windows Agent comes in several editions; only the Enterprise Edition permits use of the Syslog Listener Service. The RSyslog Windows Agent Enterprise Edition permits you to receive messages from an unlimited number of devices. Please note that instances of Adiscon’s EventReporter and/or RSyslog Windows Agent do NOT count against the remote device count. So you may use an RSyslog Windows Agent Professional edition to receive data from 500 servers with Adiscon EventReporter installed on them.

5. Licensed remote file monitor Clients
RSyslog Windows Agent can be used to monitor text files on remote systems. This remote monitoring requires a proper license. RSyslog Windows Agent comes in several editions, each of them permitting remote file monitoring for a different number of remote systems. You must purchase the edition of RSyslog Windows Agent that reflects the number of different remote systems being monitored by RSyslog Windows Agent. The RSyslog Windows Agent Basic Edition does not permit you to monitor files locally or on remote systems. The RSyslog Windows Agent Professional Edition permits you to monitor files locally and on up to 10 remote systems. The RSyslog Windows Agent Enterprise Edition permits you to monitor files on an unlimited number of remote systems.

Please note that only the number of remote systems counts toward the license. So if you monitor 50 files on a single remote system, this counts as a single license. If you monitor a single file on each one of 50 remote systems, then this counts as 50 licenses.

6. Licensed remote event log monitor clients
RSyslog Windows Agent can be used to monitor Windows event logs on remote computers. A full RSyslog Windows Agent license is required for each remote computer on which Windows event logs are being monitored. Technically, the product might count licenses based on the number of remote event log monitors configured. In such cases, a license is required for each remote event log monitor configured.

7. Product Editions
A specific edition of the RSyslog Windows Agent product is licensed. Only the specifically licensed edition may be used. For example, if an RSyslog Windows Agent Basic Edition is licensed, features of the Professional Edition may not be used. The license keys are also technically different; that is, a Basic Edition license key is technically different from a Professional Edition license key. Thus, a Basic Edition license key cannot be used to unlock Professional features.

8. Evaluation period
The product comes with a free 30 day evaluation period. We strongly encourage all customers to evaluate the product’s fitness for their systems and environment during the evaluation period. Customer agrees to install the product on production systems only after it has proven to be acceptable on similar test systems.

9. Redistribution
Everybody is granted permission to redistribute the install set if the following criteria are met:

– the install set, product and all documentation (including this license) are
supplied unaltered
– no registration key is distributed along with the install set. REGISTRATION
KEYS ARE SOLELY INTENDED FOR THE ORIGINAL CUSTOMER. IT IS COPYRIGHT
FRAUD TO DISTRIBUTE REGISTRATION KEYS.
– the redistributor may only charge a nominal fee if the product is included
in a commercial distribution set (e.g. a shareware CD collection). For a CD collection,
we deem a fee of up to US$ 30 to be reasonable.
– redistribution as part of a book companion CD is OK, as long as the book’s purpose is
not only to cover a CD software collection, in which case we deem a cost of US$ 50
for the book to be OK.

10. Disclaimer of warranty
The Software is provided “AS IS,” without a warranty of any kind. ALL EXPRESS OR IMPLIED REPRESENTATIONS AND WARRANTIES, INCLUDING ANY IMPLIED WARRANTY OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE OR NON-INFRINGEMENT, ARE HEREBY EXCLUDED.

11. Limitation of liability
ADISCON SHALL NOT BE LIABLE FOR ANY DAMAGES SUFFERED BY YOU OR ANY THIRD PARTY AS A RESULT OF USING OR DISTRIBUTING SOFTWARE. IN NO EVENT WILL ADISCON BE LIABLE FOR ANY LOST REVENUE, PROFIT OR DATA, OR FOR DIRECT, INDIRECT, SPECIAL, CONSEQUENTIAL, INCIDENTAL OR PUNITIVE DAMAGES, HOWEVER CAUSED AND REGARDLESS OF THE THEORY OF LIABILITY, ARISING OUT OF THE USE OF OR INABILITY TO USE SOFTWARE, EVEN IF ADISCON HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGES. IT IS THE CUSTOMER’S RESPONSIBILITY TO USE THE EVALUATION PERIOD TO MAKE SURE THE PRODUCT CAN RUN WITHOUT PROBLEMS IN THE CUSTOMER’S ENVIRONMENT.

12. Severability
The user must assume the entire risk of using the program. IN NO EVENT WILL ADISCON BE LIABLE FOR ANY DAMAGES IN EXCESS OF THE AMOUNT ADISCON RECEIVED FROM YOU FOR A LICENSE TO THE SOFTWARE, EVEN IF ADISCON SHALL HAVE BEEN INFORMED OF THE POSSIBILITY OF SUCH DAMAGES, OR FOR ANY CLAIM BY ANY OTHER PARTY.

13. Termination
The License will terminate automatically if you fail to comply with the limitations described herein. On termination, you must destroy all copies of the Software.

14. Product Parts Not Covered by this License
RSyslog Windows Agent may include optional parts which are not covered by this license. For example, a free web-based interface for log access (phpLogCon) may be part of the RSyslog Windows Agent package. Any package not licensed under the terms of this license will carry a prominent note as well as its own license document. If in doubt, the RSyslog Windows Agent service, the RSyslog Windows Agent Configuration Program and the Interactive Syslog Server as well as accompanying documentation are licensed under this agreement.

15. High risk activities
The Software is not fault-tolerant and is not designed, manufactured or intended for use or resale as on-line control equipment in hazardous environments requiring fail-safe performance, such as in the operation of nuclear facilities, aircraft navigation or communication systems, air traffic control, direct life support machines, or weapons systems, in which the failure of the Software could lead directly to death, personal injury, or severe physical or environmental damage (“High Risk Activities”). Adiscon specifically disclaims any express or implied warranty of fitness for High Risk Activities.

16. General Provisions
This license agreement shall be governed and interpreted in accordance with the substantive law of Germany applicable to contracts made and performed there. The place of performance of the agreement is Grossrinderfeld, Germany, notwithstanding where the Customer is situated or any servers are located.

If any provision of this license agreement shall be held void or unenforceable by a court of competent jurisdiction, it shall be severed from this agreement and shall not affect the remaining provisions. Void clauses are to be construed in such a way that the business purpose of said clauses as envisaged by both parties can be realized in a lawful manner. Except as expressly set forth in this agreement, the exercise by either party of any of its remedies under this agreement will be without prejudice to its other remedies under this agreement or otherwise.

rsyslog error 2291

Error in RELP processing.

If this message occurs:
imrelp: could not activate relp listner, code 10046

This means that the platform does not provide TLS auth support. In general, this means that GnuTLS is too old.

Here is the recommended solution:
Install a current version of GnuTLS on that system and rebuild librelp from source.

Alternatively, you can try to use TLS in anonymous mode! Please note that while this does not guard against man-in-the-middle attacks, it at least keeps the message flow encrypted.
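For illustration, a minimal RELP listener in anonymous TLS mode might look roughly like this (the port number is an example, not taken from this entry):

```
module(load="imrelp")

# anonymous TLS: traffic is encrypted, but no certificate-based
# peer authentication is configured
input(type="imrelp" port="2514" tls="on")
```

The sending side would use a matching omrelp action with tls="on" and no authentication parameters.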

This is a stub entry: If you have questions please post a comment or visit the github issue tracker.

rsyslog error 2209

Module (config name) is unknown.

In almost all cases, this message is caused by

  1. a typo in the module name
  2. the module not being installed

Please note that rsyslog usually is packaged via a core package (usually called “rsyslog”) and feature packages (usually called “rsyslog-feature”). The core package contains only what is frequently needed. For extra functionality, you need to install the feature package. For example, omelasticsearch is usually packaged in a package named along the lines of “rsyslog-omelasticsearch” or “rsyslog-elasticsearch”. The actual package name depends on your distro and/or custom repository used (e.g. rsyslog’s package repositories).
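As a sketch, on an RPM-based distro the install-then-load sequence might look like this (the exact package name is an assumption; check your distro's repositories):

```
# install the feature package that ships the module (name varies by distro):
#   yum install rsyslog-elasticsearch

# then load it in rsyslog.conf:
module(load="omelasticsearch")
```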

Please note that not all modules are packaged. It depends on the distro and/or package maintainer. Especially some very infrequently used modules are often not packaged in official distros. Some are available inside the rsyslog repositories. The same holds true for contributed modules which are not officially supported by the rsyslog core team.

Check the rsyslog repositories to see if you can install a module from there if your distro does not provide it. If you do not find the module there, you need to build it from source.

If you have further questions please post a comment or visit the github issue tracker.

rsyslog error 2066

Module could not be loaded – problem in dlopen().

A problem occurred during dlopen()ing a loadable module (e.g. an input or output plugin).

The most common causes for this are that the module either does not exist or has incorrect permission settings. Permission problems are less likely if rsyslogd runs as root, which it does by default.

Note that most packages only distribute a very limited set of plugins via the core rsyslog package. To use enhanced functionality (like the file input, database or mail outputs and many more), you need to install additional packages (their names are distro-specific, so they cannot be given here).

When building from source, check the ./configure output to see which modules are built. A summary is given at the end of the configure run. Use “./configure --help” to see which options enable which modules.
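For example, a source build that should include the Elasticsearch output might be configured roughly like this (verify the exact flag name via ./configure --help first):

```
./configure --help | grep -i elasticsearch   # check the exact option name
./configure --enable-elasticsearch           # build with the module enabled
make && make install
```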

This is a stub entry: If you have questions please post a comment or visit the github issue tracker.

Data Privacy Policy

Our website may be used without entering personal information. Different rules may apply to certain services on our site, however, and are explained separately below. We collect personal information from you (e.g. name, address, email address, telephone number, etc.) in accordance with the provisions of German data protection statutes. Information is considered personal if it can be associated exclusively to a specific natural person. The legal framework for data protection may be found in the German Federal Data Protection Act (BDSG) and the Telemedia Act (TMG). The provisions below serve to provide information as to the manner, extent and purpose for collecting, using and processing personal information by the provider.

Adiscon GmbH
Mozartstr. 21
97950 Großrinderfeld
Germany
+49-9349-9298530
info@adiscon.com

Please be aware that data transfer via the internet is subject to security risks and, therefore, complete protection against third-party access to transferred data cannot be ensured.

Cookies

Our website makes use of so-called cookies in order to recognize repeat use of our website by the same user/internet connection subscriber. Cookies are small text files that your internet browser downloads and stores on your computer. They are used to improve our website and services. In most cases these are so-called “session cookies” that are deleted once you leave our website.

To an extent, however, these cookies also pass along information used to automatically recognize you. Recognition occurs through an IP address saved to the cookies. The information thereby obtained is used to improve our services and to expedite your access to the website.

You can prevent cookies from being installed by adjusting the settings on your browser software accordingly. You should be aware, however, that by doing so you may not be able to make full use of all the functions of our website.

Server Data

For technical reasons, data such as the following, which your internet browser transmits to us or to our web space provider (so called server log files), is collected:

– type and version of the browser you use
– operating system
– websites that linked you to our site (referrer URL)
– websites that you visit
– date and time of your visit
– your Internet Protocol (IP) address.

This anonymous data is stored separately from any personal information you may have provided, thereby making it impossible to connect it to any particular person. The data is used for statistical purposes in order to improve our website and services.

Contacting Us

On our website we offer you the opportunity to contact us, either by email and/or by using a contact form. In such event, information provided by the user is stored for the purpose of facilitating communications with the user. No data is transferred to third parties except as mentioned in this policy. Nor is any of this information matched to any information that may be collected by other components of our website.

Posting Comments

On our website we offer you the opportunity to post comments about individual articles. For this purpose, the IP address of the user/internet connection subscriber is stored, as well as the email address used. This information is stored for our security in the event the author through his/her comments infringes against third party rights and/or unlawful content is entered. Consequently, we have a direct interest in the author’s stored data, particularly since we may be potentially liable for such violations. Having some identifying information is also necessary if you wish to have all of your data removed – otherwise we could not find it. No data is transferred to third parties. Nor is any of this information matched to any information that may be collected by other components of our website.

Use of Google Analytics with anonymization

Our website uses Google Analytics, a web analysis service from Google Inc., 1600 Amphitheatre Parkway, Mountain View, CA 94043 USA, hereinafter referred to as “Google“. Google Analytics employs so-called “cookies“, text files that are stored to your computer in order to facilitate an analysis of your use of the site.

The information generated by these cookies, such as time, place and frequency of your visits to our site, including your IP address, is transmitted to Google’s location in the US and stored there.

We use Google Analytics with an IP anonymization feature on our website. In doing so, Google abbreviates and thereby anonymizes your IP address before transferring it from member states of the European Union or signatory states to the Agreement on the European Economic Area.

Google will use this information to evaluate your usage of our site, to compile reports on website activity for us, and to provide other services related to website and internet usage. Google may also transfer this information to third parties if this is required by law or to the extent this data is processed by third parties on Google’s behalf.

Google states that it will never associate your IP address with other data held by Google. You can prevent cookies from being installed by adjusting the settings on your browser software accordingly. You should be aware, however, that by doing so you may not be able to make full use of all the functions of our website.

Google also offers a disabling option for the most common browsers, thus providing you with greater control over the data which is collected and processed by Google. If you enable this option, no information regarding your website visit is transmitted to Google Analytics. However, the activation does not prevent the transmission of information to us or to any other web analytics services we may use. For more information about the disabling option provided by Google, and how to enable this option, visit https://tools.google.com/dlpage/gaoptout?hl=en

Use of Google AdSense

Our website employs Google AdSense. Google AdSense is a service of Google Inc., 1600 Amphitheatre Parkway, Mountain View, CA 94043 USA, for incorporating advertisements. Google AdSense uses so-called “cookies“, text files that are stored to your computer and which allow an analysis of the use of our website. Furthermore, Google AdSense uses so-called “web beacons“. Web beacons allow Google to analyze information, such as visitor traffic to our website. This information, along with your IP address and the ad format displayed, is transmitted to Google in the US where it is stored and may be transferred by Google to contracting partners. However, Google does not merge your IP address with other stored data on you. You can prevent cookies from being installed by adjusting the settings on your browser software accordingly. You should be aware, however, that by doing so you may not be able to make full use of all the functions of our website. By using our website you declare that you agree to the processing of data collected about you by Google in the manner previously described and for the purposes there specified.

Use of Google reCAPTCHA

Our website uses Google reCAPTCHA to fight SPAM. We would like to avoid doing this, but unfortunately the SPAM problem has become so bad that without reCAPTCHA we would need to turn off commenting (and we actually did so for a couple of months in early 2018). We limit the use of reCAPTCHA to pages where it is absolutely needed, e.g. when you enter comments. As reCAPTCHA is a Google service, it is subject to Google’s Privacy Policy and Terms of Use.

Information/Cancellation/Deletion

On the basis of the Federal Data Protection Act, you may contact us at no cost if you have questions relating to the collection, processing or use of your personal information, if you wish to request the correction, blocking or deletion of the same, or if you wish to cancel explicitly granted consent. Please note that you have the right to have incorrect data corrected or to have personal data deleted, where such claim is not barred by any legal obligation to retain this data.

YouTube

We use YouTube on our website. This is a video portal operated by YouTube LLC, 901 Cherry Ave, 94066 San Bruno, CA, USA, hereinafter referred to as “YouTube”.

YouTube is a subsidiary of Google LLC, 1600 Amphitheatre Parkway, Mountain View, CA 94043 USA, hereinafter referred to as “Google”.

Through certification according to the EU-US Privacy Shield

https://www.privacyshield.gov/participant?id=a2zt000000001L5AAI&status=Active

Google and its subsidiary YouTube guarantee that they will follow the EU’s data protection regulations when processing data in the United States.

We use YouTube in its advanced privacy mode to show you videos. The legal basis is Art. 6 Para. 1 lit. f) GDPR. Our legitimate interest lies in improving the quality of our website. According to YouTube, the advanced privacy mode means that the data specified below will only be transmitted to the YouTube server if you actually start a video.

Without this mode, a connection to the YouTube server in the USA will be established as soon as you access any of our webpages on which a YouTube video is embedded.

This connection is required in order to be able to display the respective video on our website within your browser. YouTube will record and process at a minimum your IP address, the date and time the video was displayed, as well as the website you visited. In addition, a connection to the DoubleClick advertising network of Google is established.

If you are logged in to YouTube when you access our site, YouTube will assign the connection information to your YouTube account. To prevent this, you must either log out of YouTube before visiting our site or make the appropriate settings in your YouTube account.

For the purpose of functionality and analysis of usage behavior, YouTube permanently stores cookies on your device via your browser. If you do not agree to this processing, you have the option of preventing the installation of cookies by making the appropriate settings in your browser. Further details can be found in the section about cookies above.

Further information about the collection and use of data, as well as your rights and protection options, can be found in Google’s privacy policy at

https://policies.google.com/privacy

Sample Data Privacy Policy Statement provided by the Law Offices of Weiß & Partner

Major CentOS7 RPM changes

We made some major changes to the way the RPMs for CentOS7/RHEL7 are built. We have adapted the spec file definitions of the base repo to build our own RPMs after we detected some trouble with the last released version. That means that some things will change, so that our RPMs are more like the official ones.

Stock CentOS 7 8.24.0 package to 8.32.0-1 package upgrade

The upgrade completes and the same functionality present before is still present. Because the syntax in /etc/rsyslog.d/listen.conf was obsolete legacy format before and still is, the file passes validation checks (rsyslogd -N6) without issue.

That said, the /etc/rsyslog.d/listen.conf file doesn’t really do anything because the /etc/rsyslog.conf file disables local logging and the /usr/lib/systemd/system/rsyslog.service unit file doesn’t enable socket activation (basically the symlink from /etc/systemd/system/syslog.service to /usr/lib/systemd/system/rsyslog.service wasn’t created and systemd doesn’t create the /run/systemd/journal/syslog socket for rsyslog to read from).

Not a problem here because the conf file was stock before and is still stock (now the upstream Adiscon copy), so imjournal is used to pull log messages (API?) instead of reading from a socket.

Adiscon repo 8.31.0-4 stable package (with unmodified Adiscon RPM config) to 8.32.0-1 package upgrade

After installing the 8.31.0-4 package (the last one), I ran systemctl disable rsyslog; systemctl enable rsyslog, and that workaround seemed to allow that version to function as expected (restart, start, stop). I then performed an upgrade to the new package and rebooted. Prior to that, attempting to run systemctl status rsyslog warned me that I should run systemctl daemon-reload (or restart) to sort things out.

After a restart, all stock settings appeared to function normally. The upgrade (yum install rsyslog) pulled in the needed libfastjson package version without my explicitly specifying to install that package. The /etc/rsyslog.conf file included in the previous stable version was replaced, but this was to be expected because I did not modify the previous conf file (thus the checksums match).

Adiscon repo 8.31.0-4 with custom config to 8.32.0-1 package upgrade

In short, the symlink from /etc/systemd/system/syslog.service to /usr/lib/systemd/system/rsyslog.service wasn’t created and systemd doesn’t create the /run/systemd/journal/syslog socket for rsyslog to read from. In a setup where imuxsock is used, not imjournal, this means that rsyslog was not able to read from the socket. To restore this functionality, you have to create a drop-in to restore the socket activation.
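One possible sketch of such a restore, based on the missing symlink described above (verify the paths on your own system):

```
# recreate the syslog.service alias so systemd provides the socket again
ln -s /usr/lib/systemd/system/rsyslog.service /etc/systemd/system/syslog.service
systemctl daemon-reload
```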

Once you did that and either rebooted or ran systemctl daemon-reload, the /run/systemd/journal/syslog socket was restored.

Addendum

Unmodified configurations should continue to work as before, so there is that.

Users who have been using the Adiscon RPMs for a while may notice a change in the available module packages, because the modules are now incorporated in the base rsyslog package, as in the RPM from the base repo. The affected module packages (now no longer needed) are:

rsyslog-mmanon
rsyslog-mmutf8fix
rsyslog-mail
rsyslog-pmaixforwardedfrom

rsyslog error reporting improved

Rsyslog provides many to-the-point error messages for config file and operational problems. These immensely help when troubleshooting issues. Unfortunately, many users never see them. The prime reason is that most distros never log syslog.* messages, so they are just thrown away and invisible to the user. While we have been trying to make distros change their defaults, this has not been very successful. The result is a lot of user frustration and fruitless support work for the community: many things can be resolved very simply if only the error message is seen and acted on.

We have now changed our approach to this. Starting with v8.21, rsyslog now by default logs its messages via the syslog API instead of processing them internally. This is a big plus, especially on systems running the systemd journal: messages from rsyslogd will now show up when you run

$ systemctl status rsyslog.service

This is the place where nowadays error messages are expected, and this is definitely a place where the typical administrator will see them. So while this change causes the need for some config adjustment on a few exotic installations (more below), we expect this to be something that will generally improve the rsyslog user experience.

Along the same lines, we will also work on better error reporting, especially for TLS and queue-related issues, which turn up frequently in rsyslog support discussions.

Some fine details on the change of behaviour:

Note: you can usually skip reading the rest of this post if you run only a single instance of rsyslog and do so with more or less default configuration.

The new behaviour has actually been available for longer; it needed to be explicitly turned on in rsyslog.conf via

global(processInternalMessages="off")

Of course, distros didn’t do that by default. Also, it required rsyslog to be built with liblogging-stdlog, which many distros do not do. While our intent when we introduced this capability was to provide the better error logging we now have, it simply did not work out in practice; the original approach was merely less intrusive. The new method uses the native syslog() API if liblogging-stdlog is not available, so the setting always works (we even consider moving away from liblogging-stdlog, as we see it wasn’t really adopted). In essence, we have primarily changed the default setting for the “processInternalMessages” parameter. This means that by default, internal messages are no longer logged via the internal bridge to rsyslog but via the syslog() API call (either directly or via liblogging). For the typical single-rsyslogd-instance installation this is mostly unnoticeable (except for some additional latency). If multiple instances are run, only the “main” one (the one processing system log messages) will see all messages. To return to the old behaviour, do either of these two:

  1. add in rsyslog.conf:
    global(processInternalMessages="on")
  2. export the environment variable RSYSLOG_DFLT_LOG_INTERNAL=1
    This will set a new default – the value can still be overwritten via rsyslog.conf (method 1). Note that the environment variable must be set in your startup script (which one is depending on your init system or systemd configuration).

Note that in most cases even in multiple-instance-setups rsyslog error messages were thrown away. So even in this case the behaviour is superior to the previous state – at least errors are now properly being recorded. This also means that even in multiple-instance-setups it often makes sense to keep the new default!
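For method 2, one way to set the environment variable under systemd is a drop-in file (the path and file name below are hypothetical; adjust for your init system):

```
# /etc/systemd/system/rsyslog.service.d/internal-messages.conf
[Service]
Environment="RSYSLOG_DFLT_LOG_INTERNAL=1"
```

After creating the drop-in, run systemctl daemon-reload and restart rsyslog for it to take effect.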

Using rsyslog to Reindex/Migrate Elasticsearch data

Original post: Scalable and Flexible Elasticsearch Reindexing via rsyslog by @Sematext

This recipe is useful in two scenarios:

  • migrating data from one Elasticsearch cluster to another (e.g. when you’re upgrading from Elasticsearch 1.x to 2.x or later)
  • reindexing data from one index to another in a cluster pre 2.3. For clusters on version 2.3 or later, you can use the Reindex API

Back to the recipe, we used an external application to scroll through Elasticsearch documents in the source cluster and push them to rsyslog via TCP. Then we used rsyslog’s Elasticsearch output to push logs to the destination cluster. The overall flow would be:

[Figure: rsyslog to Elasticsearch reindex flow]

This is an easy way to extend rsyslog, using whichever language you’re comfortable with, to support more inputs. Here, we piggyback on the TCP input. You can do a similar job with filters/parsers – you can find GeoIP implementations, for example – by piggybacking on the mmexternal module, which uses stdout and stdin for communication. The same is possible for outputs, normally added via the omprog module: we did this to add a Solr output and one for SPM custom metrics.
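For reference, a minimal omprog hookup might look like this (the binary path and script are hypothetical, not the actual Solr output mentioned above):

```
module(load="omprog")

# feed each processed message to an external program via its stdin
action(type="omprog" binary="/usr/local/bin/send-to-solr.py")
```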

The custom script in question doesn’t have to be multi-threaded, you can simply spin up more of them, scrolling different indices. In this particular case, using two scripts gave us slightly better throughput, saturating the network:

[Figure: rsyslog to Elasticsearch reindex flow, multiple scripts]

Writing the custom script

Before starting to write the script, one needs to know what the messages sent to rsyslog would look like. To be able to index data, rsyslog will need an index name, a type name and optionally an ID. In this particular case, we were dealing with logs, so the ID wasn’t necessary.

With this in mind, I see a number of ways of sending data to rsyslog:

  • one big JSON per line. One can use mmnormalize to parse that JSON, which then allows rsyslog to use values from within it as the index name, type name, and so on
  • for each line, begin with the bits of “extra data” (like index and type names) then put the JSON document that you want to reindex. Again, you can use mmnormalize to parse, but this time you can simply trust that the last thing is a JSON and send it to Elasticsearch directly, without the need to parse it
  • if you only need to pass two variables (index and type name, in this case), you can piggyback on the vague spec of RFC3164 syslog and send something like
    destination_index document_type:{"original": "document"}
    

This last option will parse the provided index name into the hostname variable, the type into syslogtag and the original document into msg. A bit hacky, I know, but quite convenient (it makes the rsyslog configuration straightforward) and very fast, since we know the RFC3164 parser is very quick and it runs on all messages anyway. No need for mmnormalize, unless you want to change the document in-flight with rsyslog.
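On the rsyslog side, those parsed properties can then drive the Elasticsearch output via dynamic templates; a sketch along these lines (template names and the server value are assumptions):

```
# use the parsed RFC3164 fields as index name, type and document
template(name="esIndex" type="string" string="%hostname%")
template(name="esType"  type="string" string="%syslogtag%")
template(name="esDoc"   type="string" string="%msg%")

action(type="omelasticsearch" server="localhost"
       searchIndex="esIndex" dynSearchIndex="on"
       searchType="esType"   dynSearchType="on"
       template="esDoc")
```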

Below you can find the Python code that can scan through existing documents in an index (or index pattern, like logstash_2016.05.*) and push them to rsyslog via TCP. You’ll need the Python Elasticsearch client (pip install elasticsearch) and you’d run it like this:

python elasticsearch_to_rsyslog.py source_index destination_index

The script being:

from elasticsearch import Elasticsearch
import json, socket, sys

source_cluster = ['server1', 'server2']
rsyslog_address = '127.0.0.1'
rsyslog_port = 5514

es = Elasticsearch(source_cluster,
      retry_on_timeout=True,
      max_retries=10)
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.connect((rsyslog_address, rsyslog_port))


result = es.search(index=sys.argv[1], scroll='1m', search_type='scan', size=500)

while True:
  # each scroll call returns the next batch of hits (and a scroll_id for the following one)
  result = es.scroll(scroll_id=result['_scroll_id'], scroll='1m')
  if not result['hits']['hits']:
    break
  for hit in result['hits']['hits']:
    s.send(sys.argv[2] + ' ' + hit["_type"] + ':' + json.dumps(hit["_source"]) + '\n')

s.close()

If you need to modify messages, you can parse them in rsyslog via mmjsonparse and then add/remove fields through rsyslog’s scripting language. However, I couldn’t find a nice way to change field names – for example, to remove the dots that are forbidden since Elasticsearch 2.0 – so I did that in the Python script:

def de_dot(my_dict):
  # rename dotted keys to underscored ones, recursing into nested dicts;
  # iterate over a snapshot so we can safely modify the dict while looping
  for key, value in list(my_dict.items()):
    if type(value) is dict:
      value = de_dot(value)
    new_key = key.replace('.', '_')
    del my_dict[key]
    my_dict[new_key] = value
  return my_dict

And then the “send” line becomes:

s.send(sys.argv[2] + ' ' + hit["_type"] + ':' + json.dumps(de_dot(hit["_source"]))+'\n')
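To sanity-check the renaming logic on nested documents, here is a standalone version of the same idea (Python 3, runnable without Elasticsearch; the sample document is invented):

```python
def de_dot(my_dict):
    # rename dotted keys to underscored ones, recursing into nested dicts;
    # iterate over a snapshot so the dict can be modified while looping
    for key, value in list(my_dict.items()):
        if isinstance(value, dict):
            value = de_dot(value)
        new_key = key.replace('.', '_')
        del my_dict[key]
        my_dict[new_key] = value
    return my_dict

doc = {"user.name": "jane", "geo": {"city.name": "Paris"}}
print(de_dot(doc))  # {'user_name': 'jane', 'geo': {'city_name': 'Paris'}}
```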

Configuring rsyslog

The first step here is to make sure you have the latest rsyslog, though the config below works with versions all the way back to 7.x (which can be found in most Linux distributions). You just need to make sure the rsyslog-elasticsearch package is installed, because we need the Elasticsearch output module.

# messages bigger than this are truncated
$maxMessageSize 10000000  # ~10MB

# load the TCP input and the ES output modules
module(load="imtcp")
module(load="omelasticsearch")

main_queue(
  # buffer up to 1M messages in memory
  queue.size="1000000"
  # these threads process messages and send them to Elasticsearch
  queue.workerThreads="4"
  # rsyslog processes messages in batches to avoid queue contention
  # this will also be the Elasticsearch bulk size
  queue.dequeueBatchSize="4000"
)

# we use templates to specify what the data sent to Elasticsearch looks like
template(name="document" type="list"){
  # the "msg" variable contains the document
  property(name="msg")
}
template(name="index" type="list"){
  # "hostname" has the index name
  property(name="hostname")
}
template(name="type" type="list"){
  # "syslogtag" has the type name
  property(name="syslogtag")
}

# start the TCP listener on the port we pointed the Python script to
input(type="imtcp" port="5514")

# sending data to Elasticsearch, using the templates defined earlier
action(type="omelasticsearch"
  template="document"
  dynSearchIndex="on" searchIndex="index"
  dynSearchType="on" searchType="type"
  server="localhost"  # destination Elasticsearch host
  serverport="9200"   # and port
  bulkmode="on"  # use the bulk API
  action.resumeretrycount="-1"  # retry indefinitely if Elasticsearch is unreachable
)

This configuration doesn’t have to disturb your local syslog (i.e. it doesn’t need to replace /etc/rsyslog.conf). You can put it somewhere else and run a separate rsyslog process:

rsyslogd -i /var/run/rsyslog_reindexer.pid -f /home/me/rsyslog_reindexer.conf

And that’s it! With rsyslog started, you can start the Python script(s) and do the reindexing.

DOWNLOAD OTHER

Apart from installing via tarball or git, RSYSLOG is also available in package form on some distributions. We are currently trying to support a broad range of distributions by using the SUSE Open Build Service. We would also appreciate collaborators for this effort.

Almost All Distributions

Rsyslog’s repository on OBS

https://software.opensuse.org//download.html?project=home%3Argerhards&package=rsyslog

The OBS repository is a community-driven resource. It provides packages for many distributions, and we plan for it to be the only one in the long term. This short video gives a brief overview of how to use the OBS repo:


All packages currently maintained by Adiscon are listed below. More detailed information about the packages and how to install rsyslog with them can be found at the specific package page. Right now Adiscon provides packages for:

 

v8:

8.2506.0 (2025-06-10)
Sha256: 6d6fd0257c95e756765d4d585a833d54dd3a0e5eeb8308b862a81b368a74bb7b
File size: 3.484 MB
8.2504.0 (2025-04-29)
Sha256: 5092a20ed40987c74cc604ebfcd6c749e47eb9fc34adc1c2637e6553e7f047ab
File size: 3.468 MB
8.2502.0 (2025-02-18)
Sha256: 02fa197d21d519f5a25a928deb9397cd387ba7382b9b449782ba31d8f3118206
File size: 3.470 MB
8.2412.0 (2024-12-03)
Sha256: 8cdfa5a077cba576bdd6b1841cc2848b774e663b2e44a39512bb820174174802
File size: 3.462 MB
8.2410.0 (2024-10-22)
Sha256: b6be03c766df4cde314972c1c01cb74f3eacf8aec57066c0c12be0e079726eba
File size: 3.438 MB
8.2408.0 (2024-08-20)
Sha256: 8bb2f15f9bf9bb7e635182e3d3e370bfc39d08bf35a367dce9714e186f787206
File size: 3.436 MB
8.2406.0 (2024-07-03)
Sha256: 1343e0269dd32166ffde04d7ceebfa0e7146cf1dbc6962c56bf428c61f01a7df
File size: 3.412 MB
8.2404.0 (2024-04-02)
Sha256: 30528d140ec1b1f079224081fa37df6e06587ff42b02e3e61f2daa0526c54d33
File size: 3.408 MB
8.2402.0 (2024-02-27)
Sha256: acbdd8579489df36b4a383dc6909a61b7623807f0aff54c062115f2de7ea85ba
File size: 3.404 MB
8.2312.0 (2023-12-12)
Sha256: 774032006128a896437f5913e132aa27dbfb937cd8847e449522d5a12d63d03e
File size: 3.358 MB
8.2310.0 (2023-10-10)
Sha256: 20d9ce792bf0a7ed0703dbf0941490f8be655f48b55b4bebdc0827bbb0ddbf11
File size: 3.349 MB
8.2308.0 (2023-08-15)
Sha256: 02086b9121e872cea69e5d0f6c8e2d8ebff33234b3cad5503665378d3af2e3c9
File size: 3.346 MB
8.2306.0 (2023-06-20)
Sha256: f6283efaadc609540a56e6bec88a362c966e77f29fe48e6b734bd6c1123e0be5
File size: 3.293 MB
8.2304.0 (2023-04-18)
Sha256: d090e90283eb4b80de8b43e5ffc6e4b59c4e3970f2aa91e63beef0a11720d74d
File size: 3.274 MB
8.2302.0 (2023-02-21)
Sha256: 25415f85b662615ce3c83077d53758029e8743cb5929044bfd3564e3d626a3b9
File size: 3.273 MB
8.2212.0 (2022-12-06)
Sha256: 53b59a872e3dc7384cdc149abe9744916776f7057d905f3df6722d2eb1b04f35
File size: 3.268 MB
8.2210.0 (2022-10-18)
Sha256: 643ee279139d694a07c9ff3ff10dc5213bdf874983d27d373525e95e05fa094d
File size: 3.266 MB
8.2208.0 (2022-08-09)
Sha256: 14de68e7b8e5ab0c5d734f82e2dc9fff22cd7f4710ad690727eb10a7b9b3df5e
File size: 3.262 MB
8.2206.0 (2022-06-14)
Sha256: a1377218b26c0767a7a3f67d166d5338af7c24b455d35ec99974e18e6845ba27
File size: 3.246 MB
8.2204.1 (2022-05-05)
Sha256: a6d731e46ad3d64f6ad4b19bbf1bf56ca4760a44a24bb96823189dc2e71f7028
File size: 3.243 MB
8.2204.0 (2022-04-19)
Sha256: 7eb52db775f87f6975b70a5fbff982507c68ba3306ae05ff967443258442245d
File size: 3.242 MB
8.2202.0 (2022-02-15)
Sha256: e41308a5a171939b3cbc246e9d4bd30be44e801521e04cd95d051fa3867d6738
File size: 3.234 MB
8.2112.0 (2021-12-16)
Sha256: 6a2a973861e9266db37bd2b7b9f672b6b970bfcd743a397b8eee6b0dc4852c41
File size: 3.230 MB
8.2110.0 (2021-10-19)
Sha256: 3f904ec137ca6412e8273f7896d962ecb589f7d0c589bdf16b1709ec27e24f31
File size: 3.217 MB
8.2108.0 (2021-08-17)
Sha256: 4826c2b6d081a9c95f469fb0115be3f9512065297d3de00ec513758cdb30b1d9
File size: 3.024 MB
8.2106.0 (2021-06-15)
Sha256: faf45c25a2265c001739e8888b3652cf685eb3f35cd65d17d5c38fd44b9ddd81
File size: 3.180 MB
8.2104.0 (2021-04-20)
Sha256: 710981c3c34f88d5d1fb55ecfc042aecad8af69414b2b1602b304f4dedbf9f43
File size: 3.175 MB
8.2102.0 (2021-02-16)
Sha256: 94ee0d0312c2edea737665594cbe4a9475e4e3b593e12b5b8ae3a743ac9c72a7
File size: 3.123 MB
8.2012.0 (2020-12-08)
Sha256: d74cf571e6bcdf8a4c19974afd5e78a05356191390c2f80605a9004d1c587a0e
File size: 3.118 MB
8.2010.0 (2020-10-20)
Sha256: 19b232f765c4ba7a35b91ef1f5f9af775f6ff78ef56bb7737a2ce79ccbb32b98
File size: 3.097 MB
8.2008.0 (2020-08-25)
Sha256: 09d2b6c8bc2f016598aed2bb719e03f822bb01d720c61e4d6e725e00dca1b650
File size: 3.051 MB
8.2006.0 (2020-06-23)
Sha256: d9589e64866f2fdc5636af4cae9d60ebf1e3257bb84b81ee953ede6a05878e97
File size: 3.047 MB
8.2004.0 (2020-04-28)
Sha256: 5fc3d7b775f0879a40606d960491812a602e22f62e006ce027ed7bcf4c9f27d9
File size: 3.002 MB
8.2002.0 (2020-02-25)
Sha256: fe86c14d860da1202c76616feac0539ea5a40a4ad182d74b7d6d2419cc2381f8
File size: 3.001 MB
8.2001.0 (2020-01-14)
Sha256: 58bf06f58cd4a4d796bc5aea65fffc18c25619285adaa90d89d4cea5921ea8da
File size: 2.999 MB
8.1911.0 (2019-11-12)
Sha256: e1f4776b1c62ad7220f4d624a89a96b0c3d4738006899356eaaef0f1f91ee104
File size: 2.967 MB
8.1910.0 (2019-10-01)
Sha256: 0219ee692f31a39743acb62aaf4196b644ce94edf386df4605fd6a11a4fe0c93
File size: 2.957 MB
8.1908.0 (2019-08-20)
Sha256: f8c8e53b651e03a011667c60bd2d4dba7a7cb6ec04b247c8ea8514115527863b
File size: 2.952 MB
8.1907.0 (2019-07-09)
Sha256: eb27535ece93174ef6b551c88467d2c9cd826b62479625bb881a53d50b079fb5
File size: 2.926 MB
8.1905.0 (2019-05-28)
Sha256: 96bd4fab8d768fd6ad22d45e10b83e159b93df9bafcde1d582e1224f647116e4
File size: 2.911 MB
8.1904.0 (2019-04-16)
Sha256: 7098b459dfc3f8bfc35d5b114c56e7945614ba76efa4e513b1db9c38b0ff9c3d
File size: 2.903 MB
8.1903.0 (2019-03-05)
Sha256: d0d23a493dcec64c7b6807a1bb8ee864ed0f3760c2ff3088008bb661d304056f
File size: 2.786 MB
8.1901.0 (2019-01-22)
Sha256: ab02c1f11e21b54cfaa68797f083d6f73d9d72ce7a1c04037fbe0d4cee6f27c4
File size: 2.750 MB
8.40.0 (2018-12-11)
Sha256: 414abbdd27b65d3cd513e1a8a7ccbd110d06160759189e818ea93aca962fc194
File size: 2.726 MB
8.39.0 (2018-10-30)
Sha256: c71f96fed6538de397df25da602384f6ee2cb67329d9f3362af2a18508616ab4
File size: 2.721 MB
8.38.0 (2018-09-18)
Sha256: 4d328ed3bcae784e15401c6c20ada2a9be380798ff6bf0da3fe2095915bba22c
File size: 2.721 MB
8.37.0 (2018-08-07)
Sha256: 295c289b4c8abd8f8f3fe35a83249b739cedabe82721702b910255f9faf147e7
File size: 2.697 MB
8.36.0 (2018-06-26)
Sha256: 8a4b5beb92c6b308c3d14de2364c2788f62ef5d37ca0448144619edfe508ee70
File size: 2.639 MB
8.35.0 (2018-05-15)
Sha256: d216a7f7c88341d5964657e61a33193c13d884c988822fced9fce3ab0b1f1082
File size: 2.590 MB
8.34.0 (2018-04-03)
Sha256: 18330a9764c55d2501b847aad267292bd96c2b12fa5c3b92909bd8d4563c80a9
File size: 2.545 MB
8.33.1 (2018-03-06)
Sha256: 2da2bd773dbd5fde4eb162d5411dac96bf596b33e62a4062610443686597e3a8
File size: 2.494 MB
8.32.0 (2018-01-09)
Sha256: 9646fdc33a6314464cba68323716010a8a55c3deb523cd798ba8b41a0efa40b8
File size: 2.478 MB
8.31.0 (2017-11-28)
Sha256: eee6318f8127f56500c1e1f672fac6207eeb87bbf0985f5af964f881a96601b2
File size: 2.498 MB
8.30.0 (2017-10-17)
Sha256: dfb9c3efe52ad03ad9f4479699139fb447177049553b6993315f53b668a2251f
File size: 2.468 MB
8.29.0 (2017-08-08)
Sha256: 220ba30b5afb0f3ddb328613fea7aa3966b01e4d0c52d6de9ab27b0858f19738
File size: 2.447 MB
8.28.0 (2017-06-27)
Sha256: 4ca5405908d612d45da700e36856430510875518eb8028d296d1ee4d2c44678e
File size: 2.435 MB
8.27.0 (2017-05-16)
Sha256: 02aefbba59324a6d8b70036a67686bed5f0c7be4ced62c039af6ee694cb5b1fd
File size: 2.435 MB
8.26.0 (2017-04-04)
Sha256: 637d43c4384f8b96dda873a0b8384045f72cb43139808dadd9e0a94dccf25916
File size: 2.393 MB
8.25.0 (2017-02-21)
Sha256: c756f16a083e5d4081fb9bfb236303a839cdca0a2c00017bd770b2e2e9677427
File size: 2.386 MB
8.24.0 (2017-01-10)
Sha256: 37f32ce33e32a88e1bea0511e8e557d90b7378b81520e3236a9af5ba6ea993d7
File size: 2.374 MB
8.23.0 (2016-11-15)
Sha256: 244e79552d37a5729f3f53786062828adc16fd080eeb0de6507bff55ed21693b
File size: 2.284 MB
8.22.0 (2016-10-05)
Sha256: 06e2884181333dccecceaca82827ae24ca7a258b4fbf7b1e07a80d4caae640ca
File size: 2.207 MB
8.21.0 (2016-08-23)
Sha256: bdb1fde87b75107b58d1cd5d00408822fb15b9f3efb8d9dbb93a1dee128339ab
File size: 2.23 MB
8.20.0 (2016-07-12)
Sha256: 339c8f848238459318bf742d1c7a48854f98418fd3a7909030b614c395165b17
File size: 2.23 MB
8.19.0 (2016-05-31)
Sha256: 3379b30f2e6ef05a0f1c7327c73923fa5877a80b984506cdf48ed068c94a575
File size: 2.23 MB
8.18.0 (2016-04-19)
Sha256: 94346237ecfa22c9f78cebc3f18d59056f5d9846eb906c75beaa7e486f02c695
File size: 2.21 MB
8.17.0 (2016-03-08)
Sha256: ec1e19b5964cf88a9c0508d438248244b71ca35967fe40b842938f4ce9ba5fb9
File size: 2.08 MB
8.16.0 (2016-01-26)
Sha256: 4fe4f97c10899086d98b9401d7e8d2bcff61c7c3f7cde8627891e36fc6ec1b76
File size: 2.08 MB
8.15.0 (2015-12-16)
Sha256: 9ed6615a8503964290471e98ed363f3975b964a34c2d4610fb815a432aadaf59
File size: 2.06 MB
8.14.0 (2015-11-05)
Sha256: 443b5b1d2b84f5cd429d06d230af7fb2352336fa6449cb6484dbd4418a7ae7c2
File size: 2.08 MB
8.13.0 (2015-09-22)
Sha256: b182bd0a7686bef093be570bfb850417191292522fb58e0ad32f2c824f754a33
File size: 2.08 MB
8.12.0 (2015-08-11)
Sha256: 466bfeac8296e89de1eb9029880998ba7b5fc25694143197bb47167df6cb7e20
File size: 2.04 MB
8.11.0 (2015-06-30)
Sha256: bc64d8ba1e3fb8cfe21eadd5fb0938381bb37ed72cef9d6f14d376d2bac9bf78
File size: 2.01 MB
8.10.0 (2015-05-19)
Sha256: b92df3f367108219e2fffccd463bf49d75cb8ab3ceaa52e9789f85eace066912
File size: 2.01 MB
8.9.0 (2015-04-07)
Sha256: eab00e8e758cd9dd33b3e2cf6af80297d1951dc7db37bd723a6488a35d577adc
File size: 2.02 MB
8.8.0 (2015-02-24)
Sha256: 147a7e474665af7a817ac18d7924e26448350a77572e7fd9cfe284cb6291a0eb
File size: 2.00 MB
8.7.0 (2015-01-13)
Sha256: c77125b67a623569c9bdca8136b9aac013f1c6fd82fb8595e3ea267e61800f9c
File size: 2.00 MB
8.6.0 (2014-12-02)
Sha256: 759f836be460c794a7649f2b5b5ef8d423388ec599bf3b49f51fded3f8c02431
File size: 1.97 MB
8.5.0 (2014-10-24)
Sha256: 0d20144be8a5d107a172418b1a39cdd48d7ef921b94e7ea45c58b12bce8caa52
File size: 1.98 MB
8.4.2 (2014-10-02)
Sha256: 71c3c6dac74fba2692f9fefb092cd3d22e2bd71eb702

Connecting with Logstash via Apache Kafka

Original post: Recipe: rsyslog + Kafka + Logstash by @Sematext

This recipe is similar to the previous rsyslog + Redis + Logstash one, except that we’ll use Kafka as a central buffer and connecting point instead of Redis. You’ll have more of the same advantages:

  • rsyslog is light and crazy-fast, including when you want it to tail files and parse unstructured data (see the Apache logs + rsyslog + Elasticsearch recipe)
  • Kafka is awesome at buffering things
  • Logstash can transform your logs and connect them to N destinations with unmatched ease

There are a couple of differences to the Redis recipe, though:

  • rsyslog already has Kafka output packages, so it’s easier to set up
  • Kafka has a different set of features than Redis (trying to avoid flame wars here) when it comes to queues and scaling

As with the other recipes, I’ll show you how to install and configure the needed components. The end result would be that local syslog (and tailed files, if you want to tail them) will end up in Elasticsearch, or a logging SaaS like Logsene (which exposes the Elasticsearch API for both indexing and searching). Of course you can choose to change your rsyslog configuration to parse logs as well (as we’ve shown before), and change Logstash to do other things (like adding GeoIP info).

Getting the ingredients

First of all, you’ll probably need to update rsyslog. Most distros come with ancient versions and don’t have the plugins you need. From the official packages you can install:

If you don’t have Kafka already, you can set it up by downloading the binary tar. And then you can follow the quickstart guide. Basically you’ll have to start Zookeeper first (assuming you don’t have one already that you’d want to re-use):

bin/zookeeper-server-start.sh config/zookeeper.properties

And then start Kafka itself and create a simple 1-partition topic that we’ll use for pushing logs from rsyslog to Logstash. Let’s call it rsyslog_logstash:

bin/kafka-server-start.sh config/server.properties
bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic rsyslog_logstash

Finally, you’ll need Logstash. At the time of writing this, we have a beta of 2.0, which comes with lots of improvements (including huge performance gains in the GeoIP filter I touched on earlier). After downloading and unpacking, you can start it via:

bin/logstash -f logstash.conf

You can also install it from packages, in which case you’d put the configuration file in /etc/logstash/conf.d/ and start it with the init script.

Configuring rsyslog

With rsyslog, you’d need to load the needed modules first:

module(load="imuxsock")  # will listen to your local syslog
module(load="imfile")    # if you want to tail files
module(load="omkafka")   # lets you send to Kafka

If you want to tail files, you’d have to add definitions for each group of files like this:

input(type="imfile"
  File="/opt/logs/example*.log"
  Tag="examplelogs"
)

Then you’d need a template that will build JSON documents out of your logs. You would publish these JSONs to Kafka and consume them with Logstash. Here’s one that works well for plain syslog and for tailed files that aren’t parsed via mmnormalize:

template(name="json_lines" type="list" option.json="on") {
  constant(value="{")
  constant(value="\"timestamp\":\"")
  property(name="timereported" dateFormat="rfc3339")
  constant(value="\",\"message\":\"")
  property(name="msg")
  constant(value="\",\"host\":\"")
  property(name="hostname")
  constant(value="\",\"severity\":\"")
  property(name="syslogseverity-text")
  constant(value="\",\"facility\":\"")
  property(name="syslogfacility-text")
  constant(value="\",\"syslog-tag\":\"")
  property(name="syslogtag")
  constant(value="\"}")
}
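For a message coming in via the local syslog socket, this template produces one JSON object per line, roughly like the following (all field values are illustrative):

```json
{"timestamp":"2016-07-12T10:31:42.123456+02:00","message":"hello world","host":"web01","severity":"info","facility":"user","syslog-tag":"myapp[1234]:"}
```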

By default, rsyslog has a memory queue of 10K messages and has a single thread that works with batches of up to 16 messages (you can find all queue parameters here). You may want to change:
– the batch size, which also controls the maximum number of messages to be sent to Kafka at once
– the number of threads, which would parallelize sending to Kafka as well
– the size of the queue and its nature: in-memory (the default), disk or disk-assisted

In an rsyslog->Kafka->Logstash setup I assume you want to keep rsyslog light, so these numbers would be small, like:

main_queue(
  queue.workerthreads="1"      # threads to work on the queue
  queue.dequeueBatchSize="100" # max number of messages to process at once
  queue.size="10000"           # max queue size
)

Finally, to publish to Kafka you’d mainly specify the brokers to connect to (in this example we have one listening to localhost:9092) and the name of the topic we just created:

action(
  broker=["localhost:9092"]
  type="omkafka"
  topic="rsyslog_logstash"
  template="json_lines"  # the template we defined earlier
)

Assuming Kafka is started, rsyslog will keep pushing to it.

Configuring Logstash

This is the part where we pick up the JSON logs (as defined by the earlier template) and forward them to the preferred destinations. First, we have the input, which will read from the Kafka topic we created. To connect, we’ll point Logstash to Zookeeper, and it will fetch all the information about Kafka from there:

input {
  kafka {
    zk_connect => "localhost:2181"
    topic_id => "rsyslog_logstash"
  }
}

At this point, you may want to use various filters to change your logs before pushing to Logsene/Elasticsearch. For this last step, you’d use the Elasticsearch output:

output {
  elasticsearch {
    hosts => "localhost" # it used to be "host" pre-2.0
    port => 9200
    #ssl => "true"
    #protocol => "http" # removed in 2.0
  }
}

And that’s it! Now you can use Kibana (or, in the case of Logsene, either Kibana or Logsene’s own UI) to search your logs!
