The rsyslog 2025 Year in Review

Evolving Proven Infrastructure for a New Era

2025 was a defining year for rsyslog. Not because of a single feature or release, but because several long-running threads finally converged: AI-assisted workflows, deeper multi-core scalability, and native integration with modern observability stacks.

Rather than chasing trends, rsyslog focused on evolving what it already does best: reliable, high-performance log and data processing for real-world infrastructure.

At the same time, the project continued a shift that has been underway for years: rsyslog is more than a syslog daemon. It is increasingly used as a flexible, programmable data and information pipeline that happens to excel at logs.

Three themes shaped the year.

AI First: Using AI Where It Actually Helps

In 2025, rsyslog adopted an “AI First (human-controlled)” strategy. Not as a marketing label, and not as a replacement for engineering judgment, but as a pragmatic response to scale: scale of code, scale of documentation, and scale of operational knowledge.

From parameters to intent

Over time, rsyslog documentation had grown large, fragmented, and difficult to navigate. AI was introduced as a documentation and navigation layer, grounded strictly in authoritative project content.

Instead of hunting for individual parameters, users can now express intent and receive correct, syntactically valid RainerScript configurations, always grounded in existing rsyslog capabilities and documentation. This supports modern workflows where rsyslog is part of larger ingestion, enrichment, and routing pipelines rather than a standalone logger.
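To illustrate the kind of result such an intent-driven request produces ("receive TCP logs, parse structured payloads, route parsed events to a remote collector"), here is a minimal RainerScript sketch. The port numbers and target host are placeholders, not recommendations:

```
# Receive events over TCP and hand them to a dedicated ruleset
module(load="imtcp")
input(type="imtcp" port="10514" ruleset="ingest")

ruleset(name="ingest") {
    # Attempt to parse a JSON payload from the message
    action(type="mmjsonparse")

    if $parsesuccess == "OK" then {
        # Structured events go to a downstream collector (placeholder host)
        action(type="omfwd" target="collector.example.com"
               port="10515" protocol="tcp")
    } else {
        # Keep unparsed events for later inspection
        action(type="omfile" file="/var/log/unparsed.log")
    }
}
```

The point is less the specific modules than the shape: intent expressed as a pipeline of input, parse, branch, and deliver stages rather than a flat list of parameters.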

AI with guardrails

AI is used as a force multiplier for human expertise, not as an autonomous decision-maker. At the same time, rsyslog formalized how AI may participate in development.

This includes the supervised use of code agents for selected development tasks. All AI-assisted code changes go through the same review, testing, and continuous integration pipelines as human-written code. Existing safeguards such as code review, automated tests, static analysis, and release gating remain fully in force.

With the introduction of AGENTS.md, the project defined clear boundaries:

  • AI contributions must be identifiable
  • Human review is mandatory
  • Traceability is required

This ensures that AI strengthens the project without weakening trust, auditability, or long-term maintainability.

Core Engineering: Scaling Data Ingest on Modern Hardware

While AI drew attention, some of the most impactful work in 2025 happened deep in the C core. These changes matter wherever rsyslog is used as a high-volume ingest layer for logs, metrics, or structured event data.

Multi-threaded TCP ingest

Historically, TCP ingestion relied on a single execution path. That design was efficient for its time, but modern servers with many cores and high-speed networks exposed its limits, even at the ingestion point.

In 2025, TCP ingestion was fundamentally redesigned to distribute workload across multiple worker threads. This allows rsyslog to fully utilize modern hardware while preserving fairness and predictable behavior under load.
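Configuration-wise, spreading ingest across workers is a small change. The sketch below assumes a module-level worker-count parameter on imtcp; the exact parameter name and default vary by version, so treat `workers` here as illustrative and check the imtcp documentation for your release:

```
# Multi-threaded TCP ingest: worker count is illustrative, verify the
# parameter name against the imtcp documentation for your rsyslog version.
module(load="imtcp" workers="4")

input(type="imtcp" port="514")
```

A worker count near the number of cores dedicated to ingest is a reasonable starting point; the right value is workload-dependent and best confirmed with the new ingest statistics.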

New internal statistics make this behavior observable and tunable, turning performance from a guess into a measurable system property.
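These counters are exposed through the impstats module. A typical setup emits machine-readable statistics to a dedicated file at a fixed interval, for example:

```
# Emit internal statistics every 60 seconds as JSON to a dedicated file,
# keeping them out of the normal syslog stream.
module(load="impstats"
       interval="60"
       format="json"
       resetCounters="on"
       log.syslog="off"
       log.file="/var/log/rsyslog-stats.log")
```

With `resetCounters="on"`, each interval reports deltas rather than lifetime totals, which makes queue depth and per-input throughput directly comparable across intervals.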

A stronger ingestion foundation

These improvements were supported by a deeper refactor of the shared TCP server code. Explicit state handling and clearer lifecycle management eliminated long-standing edge cases that only surfaced under extreme concurrency.

The result is a faster, more predictable ingestion layer suitable for use at the front of modern data pipelines.

Bridging Worlds: From Syslog to Cloud-Native Data Flows

In 2025, rsyslog continued its evolution from a traditional syslog processor into a bridge between legacy systems and cloud-native data platforms.

Native OpenTelemetry output

With the introduction of a native OpenTelemetry output module, rsyslog can now transform and forward data directly into OTLP-based backends.

This allows rsyslog to act as a high-performance ETL component: ingesting raw events, normalizing and enriching them, batching efficiently, and delivering structured data downstream with strong delivery guarantees.
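A sketch of such a pipeline stage follows. The module name `omotel` and its parameters are assumptions made for illustration only; consult the current rsyslog module reference for the actual name and options of the OpenTelemetry output:

```
# NOTE: "omotel" and its parameters are assumed for illustration; verify
# the actual OpenTelemetry output module name and options in the docs.
module(load="omotel")

action(type="omotel"
       target="otel-collector.example.com"   # placeholder collector host
       port="4317")                          # conventional OTLP/gRPC port
```

Conceptually, the action sits at the end of a ruleset that has already normalized and enriched events, so what reaches the OTLP backend is structured data rather than raw syslog lines.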

Container environments without sidecars

Native support for Linux network namespaces enables a different deployment model in containerized environments. A single rsyslog instance can now access isolated namespaces directly, avoiding the overhead and complexity of per-pod sidecars.
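As a rough sketch of the deployment model, a single daemon could attach inputs to several isolated namespaces. The `networkNamespace` parameter below is hypothetical, used only to convey the idea; check the input module documentation for the actual parameter name:

```
# NOTE: "networkNamespace" is a hypothetical parameter name used to
# illustrate one instance listening inside multiple isolated namespaces.
module(load="imtcp")

input(type="imtcp" port="514" networkNamespace="pod-a")
input(type="imtcp" port="514" networkNamespace="pod-b")
```

The design choice this enables is node-level collection: one process, one configuration, no per-pod sidecar containers to build, ship, and patch.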

This makes rsyslog well suited as a node-level data collection and processing layer in both virtualized and container-based environments.

Observability-friendly metrics

Internal statistics can now be exposed directly in formats commonly consumed by modern monitoring systems, simplifying integration into broader observability stacks.
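For example, statistics could be written in a Prometheus-style exposition format and scraped from disk. The `prometheus` format value below is an assumption based on the description above; earlier impstats releases document `legacy`, `json`, and `cee`:

```
# The "prometheus" format value is an assumption; older impstats versions
# document "legacy", "json", and "cee" as output formats.
module(load="impstats"
       interval="30"
       format="prometheus"
       log.syslog="off"
       log.file="/var/lib/rsyslog/metrics.prom")
```

A file in exposition format pairs naturally with file-based collectors, avoiding the need for a separate exporter process alongside rsyslog.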

Security and Windows: Cleaner Data at the Edge

Enterprise environments remain heterogeneous. In 2025, rsyslog continued to push intelligence closer to the edge, especially in Windows-heavy deployments.

  • Improved parsing for the still widely used legacy Windows event log SNARE format allows structured, high-quality data to be produced early in the pipeline, including message variants long considered impractical to parse reliably.
  • TLS handling was hardened and made more transparent, with clearer diagnostics when things go wrong.
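As a hedged sketch of edge-side normalization, a message-modification stage can be wired into a ruleset as below. The module name `mmsnareparse` is an assumption for illustration; verify the actual SNARE parsing module name in your rsyslog version:

```
# NOTE: "mmsnareparse" is assumed for illustration; check the module
# reference for the actual Windows/SNARE parser name in your version.
module(load="mmsnareparse")

ruleset(name="windows") {
    # Normalize SNARE-format Windows event log messages into structure
    action(type="mmsnareparse")

    # RSYSLOG_FileFormat is a built-in template
    action(type="omfile" file="/var/log/windows-events.log"
           template="RSYSLOG_FileFormat")
}
```

Parsing this early means downstream analytics systems receive clean fields instead of re-parsing raw SNARE strings at query time.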

These changes reduce downstream load and improve data quality before events ever reach analytics or storage systems.

Looking Ahead

What defined rsyslog in 2025 was not reinvention for its own sake, but careful evolution.

  • Proven mechanisms were extended, not replaced
  • AI was integrated deliberately, not blindly
  • Scalability improvements were structural, not cosmetic
  • Modern data platforms were embraced without abandoning legacy systems

As 2026 begins, rsyslog stands on a stronger foundation: faster, more observable, more flexible, and increasingly positioned as a core building block for modern data and information pipelines.

The work continues, and the direction is clear.

Scroll to top