Agent on Sentinel
In this topology, a sentinel is both:
- the front door to your fabric (admission + routing), and
- a perfectly valid place to host agents.
It’s the smallest step up from a single-process fabric that still gives you a “real” edge component, one you can keep as your system grows.
Shape
One client node connects to a sentinel over a transport; the sentinel admits the connection, routes messages, and also hosts the agent(s).
What changes vs single-process
Compared to the Single-process fabric topology, you introduce a network boundary:
- the client connects to a sentinel over a transport,
- the sentinel performs admission (who can connect / under what identity),
- and routing now happens through the sentinel.
What doesn’t have to change: your agent logic. In most cases you can move the same agent code from a local fabric to the sentinel host without rewriting it.
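To make that concrete, here is a minimal sketch. The `Agent`, `FabricHost`, and `register` names are hypothetical placeholders, not the actual Naylence API; the point is that the agent object itself stays identical in both topologies, and only the hosting surface changes.

```typescript
// Sketch only: `FabricHost` and `register` are illustrative placeholders,
// not the real Naylence API.

// The agent: plain request/response logic, unaware of where it runs.
type Handler = (input: string) => Promise<string>;

interface Agent {
  name: string;
  handle: Handler;
}

const helloAgent: Agent = {
  name: "hello",
  handle: async (input) => `Hello, ${input}!`,
};

// The hosting surface is the only thing that differs between topologies.
interface FabricHost {
  register(agent: Agent): void;
}

// Identical call whether `host` is an in-process fabric or a sentinel:
function hostAgents(host: FabricHost): void {
  host.register(helloAgent);
}
```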
Why this topology exists
This topology exists because it’s the smallest deployable shape that still gives you an explicit edge/control point.
It lets you:
- keep the deployment tiny (often one sentinel + one agent; sketched after this list),
- get routing + admission/policy from day one,
- avoid introducing dedicated worker nodes until you have a real need,
- and preserve a clean upgrade path: you can move agents off the sentinel later without changing how clients talk to the fabric.
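As a rough picture of how small that deployment stays, the whole thing can be described by a single sentinel configuration covering the transport, admission, and the hosted agent. The keys below are illustrative assumptions, not Naylence’s actual configuration schema.

```typescript
// Illustrative shape only: these keys are assumptions, not the real config schema.
const sentinelConfig = {
  // The transport clients connect over (the network boundary).
  listen: "wss://0.0.0.0:8443",

  // Admission policy: who can connect, and under what identity.
  admission: {
    allowedClients: ["client-*"],
  },

  // Agents hosted directly on the sentinel in this topology.
  hostedAgents: ["hello"],
};
```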
When to use
Use “agent on sentinel” when you want:
- the smallest deployment that still has a real edge front door,
- admission/routing policy available from day one,
- a single place to configure security controls,
- to keep infrastructure minimal while you validate agent behavior.
It’s a great default for early projects: one client node talking to one sentinel that hosts your first agents.
When not to use
Graduate from “agent on sentinel” when:
- agents are CPU/memory heavy and you don’t want them competing with the control plane,
- you need stronger isolation (crashy/experimental agents),
- you want to scale workers independently,
- you want mixed runtimes (Python agents + TypeScript clients, etc.) and a cleaner separation.
Hosting agents on the sentinel is not a hack — it’s a valid topology. But as soon as “control plane” and “work” start competing, you’ll want dedicated agent nodes.
Live demo
- Hello (agent on sentinel): https://examples-ts-hello.naylence.io
This demo is intentionally boring in behavior; the goal is to show that the sentinel can host an agent while still acting as the routing/admission boundary.
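Calling a hosted agent from a client boils down to two steps: connect to the sentinel (admission happens here) and address the agent by name (routing happens here). The names below (`connectToSentinel`, `call`) and the URL are hypothetical placeholders, not the demo’s actual client API.

```typescript
// Hypothetical client-side sketch; `connectToSentinel` and `call` are
// illustrative names, and the URL is a placeholder.
interface FabricClient {
  call(agentName: string, input: string): Promise<string>;
}

async function sayHello(
  connectToSentinel: (url: string) => Promise<FabricClient>,
): Promise<void> {
  // Admission: the sentinel decides whether this client may join, and as whom.
  const fabric = await connectToSentinel("wss://sentinel.example.com");

  // Routing: the sentinel delivers the message to the "hello" agent it hosts.
  const reply = await fabric.call("hello", "world");
  console.log(reply);
}
```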
Notes
- Clients still aren’t second-class. The client node can listen, handle streaming, and host local logic; it just happens to initiate the interaction in this topology (see the sketch after these notes).
- You can host multiple agents on the sentinel. This is often fine early on.
- You can move agents off later. This topology is designed to evolve into a dedicated sentinel topology without changing how clients address agents.
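For the first note above, a client node that also listens can be pictured like this. Again, `FabricConnection`, `call`, and `onMessage` are illustrative names assumed for the sketch, not the actual API.

```typescript
// Hypothetical sketch of a client node that also hosts local logic.
// `FabricConnection`, `call`, and `onMessage` are illustrative names only.
interface FabricConnection {
  call(agentName: string, input: string): Promise<string>;
  onMessage(handler: (msg: string) => void): void;
}

async function run(conn: FabricConnection): Promise<void> {
  // The client initiates the interaction...
  const first = await conn.call("hello", "world");
  console.log("reply:", first);

  // ...but it can also listen for messages pushed back to it (streaming, callbacks).
  conn.onMessage((msg) => {
    console.log("update from fabric:", msg);
  });
}
```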
Next step
When you’re ready to separate control and work, move to the dedicated sentinel topology.
That keeps the same “clients talk to a sentinel” model while letting your agents run on separate nodes.