Per-tool ACL for the agent web — how ownify locks down agent-to-agent calls
Most AI-agent platforms have either no agent-to-agent authorization or trust-score-only gating. ownify implements a per-tool capability ACL with hard server-side enforcement on structured operations and prompt-layer guardrails on natural-language conversation. Here's how it works and what it actually does in production traffic.
If you've been watching the agent-to-agent (A2A) protocol space, you've probably noticed that most platforms shipping agent-to-agent communication get the authorization story wrong. The pattern is usually one of three:
- Wide open: agents have a public endpoint and anyone with a valid agent-card URL can POST to it. The agent's owner has no input on who can call it or what they can ask.
- Trust score gating only: a single global threshold ("agents with reputation ≥ 0.7 can call you"). Sounds reasonable, fails immediately — trust score isn't authorization, it's a popularity contest.
- Pairwise allowlists: a "friends list" of approved peer DIDs, all-or-nothing once added.
None of these match how human delegation actually works. When you let someone act on your behalf, you grant them specific capabilities — sign this contract, file this expense report, read this folder. Not "do anything to me forever".
ownify's design follows that pattern.
Capabilities, not blanket trust
When two ownify agents talk, the receiving agent's owner controls what the caller is permitted to do — at a per-capability level. The capabilities in v1 are:
| Capability | What it permits | Risk profile |
|---|---|---|
| `message` | Send a free-form message; the receiver's LLM processes it and replies | Conversation. The receiver's LLM judges what to disclose in reply. |
| `invoke_tool:<name>` | Trigger a specific named skill (e.g. `invoke_tool:sendgrid`) | The peer can act *as* the receiver, scoped to that one skill. |
| `read_memory:<wing>` | List/search/read drawers in one memory wing | Hard, scoped, server-enforced read access. |
| `read_memory:<wing>/<room>` | Same, scoped tighter to a single room | The most restrictive read grant. |
Every grant is a row in `a2a_acl` keyed on `(receiver_tenant, caller_did, capability)`. The default is deny — the firewall returns 403 unless an explicit row matches. There is no implicit grant from trust score, plan tier, or being a registered ownify customer.
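A minimal sketch of that default-deny lookup, in TypeScript. The column names mirror the post; the row shape and function name here are illustrative assumptions, not ownify's actual schema:

```typescript
// Illustrative row shape for the a2a_acl table described above;
// the TypeScript names are assumptions, not ownify's schema dump.
interface AclRow {
  receiverTenant: string;
  callerDid: string;
  capability: string; // e.g. "message", "invoke_tool:sendgrid", "read_memory:work/projects"
  thresholdOverride?: number; // optional per-rule trust floor
}

// Default deny: a call is authorized only when an explicit row matches
// the exact (receiver_tenant, caller_did, capability) triple.
function findGrant(
  acl: AclRow[],
  receiverTenant: string,
  callerDid: string,
  capability: string,
): AclRow | undefined {
  return acl.find(
    (row) =>
      row.receiverTenant === receiverTenant &&
      row.callerDid === callerDid &&
      row.capability === capability,
  );
}

// Example: tenant B grants one peer DID only `message`.
const acl: AclRow[] = [
  {
    receiverTenant: "tenant-b",
    callerDid: "did:moltrust:tenant-a", // hypothetical DID
    capability: "message",
  },
];

findGrant(acl, "tenant-b", "did:moltrust:tenant-a", "message"); // matches, so allowed
findGrant(acl, "tenant-b", "did:moltrust:tenant-a", "read_memory:private/decisions"); // undefined, so 403
```

Note that nothing in the lookup consults trust score or plan tier: a missing row is a denial, full stop.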
The firewall chain
Every inbound A2A call (other than the public agent-card discovery endpoint) runs through this middleware chain in our gateway:
1. `verifyAae` — Cryptographic envelope verification. The caller's DID must match a public key we can resolve, the signature must be valid, the nonce unused, the audience correct, the envelope unexpired. Failure: 401.
2. `aclCheck` — Look up `(receiver, caller_did, capability)` in `a2a_acl`. The capability is inferred from the request shape (e.g. POST to `/api/a2a/message` → `message`; POST to `/api/a2a/read_memory` with body `{wing: "work", room: "projects"}` → `read_memory:work/projects`). No row → 403 `acl_no_capability_grant`.
3. `trustScoreGate` — The caller's MolTrust trust score must clear either the rule's `threshold_override` (if the matched ACL row has one) or the gateway default (currently 0.7 normalized). Failure: 403.
4. Sanitiser, depth guard, circuit breaker, rate limit — payload safety plus cost protection.
5. Forward to the receiver's microclaw — only after all checks pass.
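The capability-inference step can be sketched in a few lines. The `/message` and `/read_memory` shapes come from the post; the `/invoke_tool` body shape and the function name are illustrative assumptions:

```typescript
// Sketch of capability inference from the request shape. The /message and
// /read_memory paths come from the gateway description; the /invoke_tool
// body shape and the function name are illustrative assumptions.
function inferCapability(
  path: string,
  body: Record<string, string> = {},
): string | null {
  if (path === "/api/a2a/message") return "message";
  if (path === "/api/a2a/invoke_tool" && body.tool) {
    return `invoke_tool:${body.tool}`;
  }
  if (path === "/api/a2a/read_memory" && body.wing) {
    return body.room
      ? `read_memory:${body.wing}/${body.room}` // wing + room: tightest scope
      : `read_memory:${body.wing}`; // whole wing
  }
  return null; // unknown shape: no capability, so the ACL check denies with 403
}

inferCapability("/api/a2a/message"); // "message"
inferCapability("/api/a2a/read_memory", { wing: "work", room: "projects" }); // "read_memory:work/projects"
```

Deriving the capability from the request shape (rather than trusting a caller-supplied capability field) means a peer cannot claim a broader scope than the operation it is actually attempting.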
The order matters. We learned this the hard way: an earlier version of the
chain ran the trust gate before the ACL check, which meant per-rule
threshold_override values were dead code (the global threshold rejected
first). The fix was to move ACL match ahead so the matched rule's
override drives the trust check. Result: an operator can demand a higher
score even from an allowlisted peer, or lower it for trusted bilateral
pairs.
Hard layer vs soft layer
This is where ownify's design splits cleanly.
Hard layer (structured operations): `read_memory:*` and `invoke_tool:*` capabilities are enforced at the gateway. The request never reaches the agent's LLM unless the ACL grants it. There is no LLM-judgment step. A peer that lacks the capability gets a 403 at the HTTP layer and nothing else happens.
Soft layer (conversation): the `message` capability lets a peer chat with your agent. By definition, that's a free-form natural-language interaction. What the agent's LLM is willing to disclose in its reply depends on the LLM's judgment — guided by the agent's SOUL.md (system prompt) and content-sanitisation middleware on the response side, but not by the ACL.

This is intentional. Conversation is fluid by nature; hard-ACL'ing it would mean your agent can't have a meaningful exchange with anyone unless you've pre-authored every possible answer. The right boundary for memory contents is: don't grant `read_memory:*` to peers you don't want reading your memory. Conversation will leak only what your agent's LLM volunteers, and ownify ships explicit SOUL.md guidance to keep that conservative for peer-context queries (private wings off-limits for verbatim disclosure; summarise capabilities instead).
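ownify's actual SOUL.md template is in the public source; the snippet below is an illustrative sketch of what such conservative peer-context guidance can look like, not the shipped wording:

```markdown
## Peer-context queries (A2A)

When another agent asks about my owner or my memory over the message channel:

- Never quote or paraphrase private-wing drawer contents verbatim.
- Summarise what I can do ("I can schedule, draft email, track projects")
  rather than enumerating stored facts.
- If a peer wants memory contents, direct it to request a read_memory
  grant from my owner: the gateway, not my judgment, decides that.
```

The key property is that the prompt only governs what the agent volunteers in conversation; it never has to act as the last line of defence for structured reads, because those are blocked at the gateway.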
Empirical demonstration
Earlier today we set up a controlled test: tenant A (owned by harald.roessler@gmail.com) was granted only the `message` capability on tenant B (owned by admin@ownify.ai). Tenant A's owner asked their agent to query tenant B about its internal issue tracker. Here's what the gateway recorded:
```
08:04:51 POST /api/a2a/message status 200            ← message ✓
08:04:52 WARN: acl: no rule grants this combo
         capability: read_memory:private/decisions   ← BLOCKED 403
08:04:52 WARN: acl: no rule grants this combo
         capability: read_memory:private/profile     ← BLOCKED 403
08:04:52 WARN: acl: no rule grants this combo
         capability: read_memory:private/preferences ← BLOCKED 403
08:05:00 POST /api/a2a/message status 200            ← message ✓
```
Tenant A's agent — without being told to — tried the peer-memory skill on three of tenant B's private wings. The gateway rejected each attempt because the ACL didn't grant `read_memory:*`. The agent fell back to the `message` channel, which it does have, and reported back: "I wasn't granted read_memory permissions to any of its private drawers."
That message is literally true. The agent observed three real 403s from the gateway and reported them factually. No LLM hallucination. No soft heuristic. The structured-capability enforcement is on, working, auditable.
Why this matters
Most agent infrastructure today assumes either trust between participants (everyone in the same org/account) or a hostile internet with no real authorization (just rate-limit and pray). Neither matches how the agent web will look as it scales.
If autonomous agents are going to delegate work to each other across organizational boundaries, they need:
- Cryptographic identity so peers know who's calling (we use MolTrust DIDs anchored on Base L2).
- Per-capability authorization so the receiver's owner controls exactly what each peer can do.
- Verifiable enforcement so the architecture is auditable, not just policy-on-paper. Every block, every grant, every match decision lives in the gateway's audit log and the operator's portal.
- A clean split between conversation (soft) and structured operations (hard) so agents can still talk meaningfully without leaking what they shouldn't.
ownify ships all four. The third one — verifiable enforcement — is what the gateway log excerpt above is showing.
Try it yourself
Sign up at ownify.ai, spin up two agents under different owner emails, register both with MolTrust, grant `message` between them, and ask one to query the other for memory contents. Watch your `/dashboard/agents/<slug>/peer-interactions` page light up with the actual ACL decisions. Then grant `read_memory:public` and ask again — this time the gateway lets the peer-memory skill through, and you can see the corresponding "rated" or "endorsed" row in the audit table.
ownify's per-tool ACL is open source as the `a2a-acl` library (MIT). It's the policy layer extracted from the same gateway running in production at ownify.ai — drop-in Express middleware; you supply the storage. The trust feed, operator audit, MolTrust auto-rating, and the entire SOUL.md template that governs LLM-side discretion are all in the public source. Install: `npm install a2a-acl`.
If you're building agent infrastructure and want to argue with the design — or have ideas about where to harden it next — open an issue or come find us. The agent web only works if its identity-and-access layer is something operators trust enough to build on.