Myth-Buster: rsyslog is not “just a legacy syslogd”
The myth is persistent — partly because of the name. Yes, rsyslog started life as an enhanced syslog daemon for Linux. But over two decades, it has evolved into a high-performance ETL engine that powers data pipelines in thousands of production environments.

From syslog to full ETL
rsyslog ingests data from almost anywhere — local files, system journals, network protocols (UDP, TCP, RELP, TLS), or modern message brokers like Kafka.
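As a minimal sketch (port numbers are illustrative), several of these ingestion paths can run side by side in one configuration:

```rsyslog
# Load input modules: UDP/TCP syslog plus RELP for reliable delivery
module(load="imudp")
module(load="imtcp")
module(load="imrelp")

# Listen on the standard syslog ports and a RELP port (values are examples)
input(type="imudp" port="514")
input(type="imtcp" port="514")
input(type="imrelp" port="2514")
```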
Once inside, its modular design turns raw logs into structured, enriched, and policy-compliant data streams.
With modules like mmnormalize and mmjsonparse, it parses unstructured text into structured JSON; companion modules handle PII redaction (mmanon) and GeoIP enrichment (mmdblookup), with filtering and routing logic expressed in RainerScript.
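As a sketch of that flow (file paths and the template name are illustrative), a RainerScript snippet can attempt JSON parsing and route on the result:

```rsyslog
module(load="mmjsonparse")

# Render all parsed JSON properties as one JSON object per line
template(name="outjson" type="string" string="%$!all-json%\n")

# Try to parse the message payload as JSON
action(type="mmjsonparse")

# Route based on whether parsing succeeded
if $parsesuccess == "OK" then {
    action(type="omfile" file="/var/log/structured.json" template="outjson")
} else {
    action(type="omfile" file="/var/log/unparsed.log")
}
```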
Reliable at any scale
rsyslog can process more than a million messages per second, using disk-assisted queues, backpressure control, and RELP for reliable delivery.
Outputs range from local files to Elasticsearch, Kafka, HTTP endpoints, or any custom destination — making it a true Extract–Transform–Load (ETL) framework for event data.
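A hedged sketch of such an output, assuming a local Elasticsearch instance (server address, index name, and queue sizes are illustrative): a disk-assisted queue buffers messages when the destination is unreachable and retries instead of discarding:

```rsyslog
module(load="omelasticsearch")

action(type="omelasticsearch"
       server="localhost"
       serverport="9200"
       searchindex="logs"
       # Disk-assisted in-memory queue: spills to disk under backpressure
       queue.type="LinkedList"
       queue.filename="es_fwd_q"
       queue.maxdiskspace="1g"
       queue.saveonshutdown="on"
       # Keep retrying indefinitely instead of dropping messages
       action.resumeretrycount="-1")
```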
Still a great syslogd
rsyslog remains fully compatible with traditional syslog — it can replace a classic syslogd as a drop-in, yet offers far more flexibility and observability.
This backward compatibility is why some still see it as “just a daemon.”
But in modern infrastructures, rsyslog often forms the core of observability, SIEM, and analytics pipelines — combining deterministic performance with deep configurability.
Looking ahead
Under the hood, rsyslog is now guided by an AI-First (human-controlled) vision: faster automation, smarter policy generation, and deeper integration with next-gen analytics.
So next time you hear “legacy syslogd,” remember:
rsyslog still runs your syslog — and now it runs your data pipeline too.
👉 Learn more in the FAQ: Using rsyslog as an ETL tool
