Call for Experiments: Agentic Coding on rsyslog

We are inviting developers, junior engineers, and curious “vibe coders” to experiment with AI code agents on a real production code base: rsyslog.


This is not a sandbox or demo repository. This is mature infrastructure software with real users, strict quality requirements, and a long maintenance history.

If you want to see how far agentic coding really goes, this is a good place to try.

The experiment workflow

The process is intentionally flexible, but grounded in real-world contribution flow:

  1. Let the AI agent pick the topic
    Ideally, have the agent inspect issues, code, tests, or recent review discussions and propose a suitable task. Small and contained topics work best. Alternatively, you may choose a topic yourself if you already have one in mind.
  2. Let the agent drive the implementation
    You choose the agent, model, and setup. The agent analyzes the problem, proposes a fix, and implements it. You supervise and guide, but do not micromanage.
  3. Create a local PR (optional, but recommended)
    A local PR gives you a structured diff to review and iterate on before anything goes upstream, which usually improves results.
  4. Submit the PR to the rsyslog repository
    Clearly mark it as agent-generated (see label below). From here on, the normal review process applies.
  5. Iterate through review
    Expect a couple of iterations. Sometimes the agent converges to a solid solution. Sometimes it does not. Both outcomes are valid and useful.
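
To make this concrete, here is a minimal command-line sketch of steps 2 through 4. The branch name, commit message, and PR title are placeholders, the build and test invocation assumes rsyslog's usual autotools setup, and the GitHub CLI (gh) stands in for the web UI; consult the project's contribution documentation for the authoritative steps.

    # Work on a dedicated branch in a fresh checkout (see the notes below)
    git checkout -b agent-experiment        # placeholder branch name

    # ... let the agent analyze the problem, propose a fix, and implement it ...

    # Sanity-check locally; typical autotools flow, check the build docs for exact steps
    ./autogen.sh && ./configure && make && make check

    # Commit, push to your fork, and open the PR, clearly marked as agent-generated
    git commit -a -m "<component>: <summary> (agent-generated)"
    git push origin agent-experiment
    gh pr create --title "Agent experiment: <topic>" \
        --body "Generated with an AI coding agent; see the agentic-coding-experiment label."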

Important notes before you start

  • The rsyslog code base is instrumented for AI usage
    Please use a fresh checkout (see the clone commands after this list). We improve AI-related structure, hints, and tooling almost daily. Older checkouts may miss important context.
  • There is a real risk that no merge happens
    Some agent-driven attempts will stall or fail to reach an acceptable solution. This is expected and still a valuable learning outcome.
  • Quality expectations are real
    Agent-generated PRs are reviewed like any other contribution. Based on current experience, we believe many can meet our quality bar. Nothing is auto-accepted.
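
For the fresh checkout mentioned in the first note above, a new clone of the upstream repository (or a fast-forward update of an existing one) is all it takes:

    # Fresh clone of the upstream repository
    git clone https://github.com/rsyslog/rsyslog.git
    cd rsyslog

    # Or bring an existing clone up to date to pick up the latest AI hints and tooling
    git pull --ff-only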

Who this is for

  • Developers curious about agentic coding in non-trivial systems.
  • Junior developers who want to level up by working with real code and real reviews.
  • Vibe coders who want to see how AI-generated code holds up under production standards.
  • Anyone interested in understanding the practical limits of AI-assisted development.

You do not need to be an rsyslog expert. You do need to be willing to iterate and learn.

Labeling and tracking

To make these experiments visible and easy to evaluate over time, we tag related issues and PRs with:

agentic-coding-experiment

This label is intentionally neutral and descriptive. It focuses on what is being tested (agentic coding) rather than on tools, hype, or outcomes.
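
As a sketch, the label can be applied with the GitHub CLI if you have triage or write access on the repository; contributors without that access can simply mention the label in the PR description and a maintainer will add it. The PR number below is a hypothetical placeholder.

    # 1234 is a hypothetical PR number; setting labels requires triage or write access
    gh pr edit 1234 --add-label agentic-coding-experiment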

Why we are doing this

There is a lot of discussion around AI coding agents, but little evidence from mature, long-lived code bases.

rsyslog provides real constraints, real users, and real review pressure.

If agentic coding works here, it works under real conditions. If it does not, we want to understand where the limits are. Either way, the result is useful.
