NewsLab
Apr 28 20:38 UTC

Show HN: Open Bias – proxy that enforces agent behavior at runtime (github.com)

6 points | by algomaniac | 3 comments | github.com

Comments (3)

  1. algomaniac
    Hey HN,

    We spent the past year working on evals for teams running AI agents in production. We kept seeing rules that passed in evals stop holding after a while (or fail inconsistently). And as teams added more rules, the agent missed more of them overall.

    Evals and observability help, but the long tail finds you in prod anyway. Guardrails are like the side rails on a highway: useful, but you don't want to hit them often. We wanted the lane-keeping system that steers the agent back as it starts to deviate.

    So we built an open-source proxy that helps steer agents, catching and fixing violations before they reach users. Rules live in a RULES.md file (a single source of truth for all policies). The thing we care most about is that the engines doing the checking are pluggable:

    - Some checks are best as regex or deterministic code

    - Some are LLM-as-judge

    - Some are existing guardrail systems like Nvidia's NeMo

    - Some are state classifiers for workflows

    Results from all of them get combined to steer the agent (intervene, block, or shadow). No single evaluator is going to be right for every rule, and we didn't want to pretend otherwise. We're still calibrating a bunch of things: per-engine thresholds, voting across judges, how to aggregate signals across engines, and the classifier that routes rules to the right engine.
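    To make the pluggable part concrete, here's a minimal sketch of the shape (hypothetical names, not the actual open-bias API): each engine returns a verdict with a confidence, and a threshold-based aggregator maps the worst violation onto one of the three actions.

```python
# Sketch of a pluggable check engine and verdict aggregation.
# All names here are illustrative, not the open-bias API.
import re
from dataclasses import dataclass
from typing import Protocol

@dataclass
class Verdict:
    rule_id: str
    violated: bool
    confidence: float  # 0.0-1.0, self-reported by the engine

class CheckEngine(Protocol):
    def check(self, rule_id: str, text: str) -> Verdict: ...

class RegexEngine:
    """Deterministic checks: cheap and exact, so confidence is always 1.0."""
    def __init__(self, patterns: dict[str, str]):
        self.patterns = {rid: re.compile(p) for rid, p in patterns.items()}

    def check(self, rule_id: str, text: str) -> Verdict:
        hit = bool(self.patterns[rule_id].search(text))
        return Verdict(rule_id, violated=hit, confidence=1.0)

def aggregate(verdicts: list[Verdict],
              block_at: float = 0.9, intervene_at: float = 0.6) -> str:
    """Map the worst violation across engines onto an action."""
    worst = max((v.confidence for v in verdicts if v.violated), default=0.0)
    if worst >= block_at:
        return "block"       # hold the response
    if worst >= intervene_at:
        return "intervene"   # inject a correction
    if worst > 0.0:
        return "shadow"      # log only, no user-facing change
    return "pass"

engine = RegexEngine({"no-refunds": r"\bguaranteed refund\b"})
v = engine.check("no-refunds", "You get a guaranteed refund, always!")
print(aggregate([v]))  # → block
```

    An LLM-as-judge engine would implement the same `check` signature but report a calibrated confidence below 1.0, which is where the per-engine thresholds come in.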

    Checkers for critical violations run synchronously and block before the response goes out. Non-critical ones run asynchronously, and the correction lands on the next turn (keeping latency low was essential for teams running voice agents). We're still building the classifier that decides criticality; for now it's specified in the config.

    Our instinct is that it's easier to detect an agent's mistake than to prevent the agent from making it in the first place. The main agent carries the full context (system prompt, tools, conversation history, business logic), but each checker can take a narrow slice, run in parallel, and perform simpler computations (or answer simpler questions). Cheaper, faster, and you can stack them.

    Beta, rough in places. Would love feedback, especially from anyone running agents in prod and feeling this. Happy to go deep on the architecture, engines, whatever.

    Repo: https://github.com/open-bias/open-bias

  2. JotatD2
    i love the steering concept! but wouldn't this 2x my token spend? cost is already the bottleneck on the agent workflows we run
  3. algomaniac
    Thanks! In practice it's a fraction of that. The judge sees a much smaller slice than the original call, and it's usually a smaller model, 10-20x cheaper than the one running your agent. We've also seen folks who were using a frontier model mainly for reliability end up comfortable downgrading to a cheaper agent model. We'll add a proper cost breakdown to the docs!
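    Back-of-envelope (all numbers illustrative, not measurements): if a check sees ~1/10 of the tokens on a model ~15x cheaper per token, it costs well under 1% of the agent call.

```python
# Illustrative per-check overhead, not measured numbers.
agent_tokens = 5_000   # full context the agent model sees per call
judge_tokens = 500     # narrow slice a checker sees
price_ratio = 1 / 15   # judge model ~15x cheaper per token (mid of 10-20x)

overhead_per_check = (judge_tokens / agent_tokens) * price_ratio
print(f"{overhead_per_check:.1%} of the agent call per check")  # → 0.7%
```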