2 points · Cohesix · 7 hours ago
I built Cohesix 0.4.0-alpha to treat compliance and security as a systems problem. It is a control-plane OS for edge GPU nodes, running as an seL4 VM, and it exposes a Secure9P namespace instead of a traditional filesystem or RPC layer.
The heresy is deliberate. The VM userspace is no_std: no POSIX, no traditional filesystem, no in‑VM RPC, no background daemons. The interface is a small, explicit grammar: file‑shaped control surfaces under a Secure9P namespace.
This is not minimalism for its own sake. It is about determinism, auditability, revocation, bounded behavior, and making failure modes legible. Tickets and leases expire; budgets return ELIMIT instead of mystery latency; /proc exposes queue and lease state.
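To make that concrete, here is a rough host-side sketch in Rust. Everything in it is illustrative: the /mnt/hive mount point, the control-command grammar, and the file names are stand-ins, assuming the Secure9P namespace has been mounted on an operator host.

    use std::fs;
    use std::io::Write;

    fn main() -> std::io::Result<()> {
        // Illustrative: assume the Secure9P namespace is mounted here.
        let ns = "/mnt/hive";

        // Submitting work is a write to a control file, not an RPC call.
        // The command grammar below is made up for illustration.
        let mut ctl = fs::OpenOptions::new()
            .write(true)
            .open(format!("{ns}/queen/ctl"))?;
        match ctl.write_all(b"spawn worker-gpu job=demo budget=5ms\n") {
            Ok(()) => println!("accepted"),
            // A blown budget comes back as an explicit error (ELIMIT),
            // not as unbounded latency.
            Err(e) => eprintln!("rejected: {e}"),
        }

        // Queue and lease state are files you read, not state you infer.
        let leases = fs::read_to_string(format!("{ns}/proc/leases"))?;
        print!("{leases}");
        Ok(())
    }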
A deployment is a hive: a queen role orchestrates worker‑heart and worker‑gpu roles, and NineDoor exports paths like /queen/ctl, /proc, /log, and /worker/<id>/telemetry. Operators attach with ‘cohsh’ over an authenticated TCP console; that console is the only in‑VM listener.
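Observability falls out of the same shape. A minimal sketch of polling worker telemetry from the operator side (same illustrative /mnt/hive mount as above):

    use std::fs;

    fn main() -> std::io::Result<()> {
        // Each worker shows up as a directory; telemetry is just a file in it.
        for entry in fs::read_dir("/mnt/hive/worker")? {
            let dir = entry?.path();
            let telemetry = fs::read_to_string(dir.join("telemetry"))?;
            println!("{}: {}", dir.display(), telemetry.trim());
        }
        Ok(())
    }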
Cohesix does not try to replace Linux, Kubernetes, CUDA, or existing OSS. Heavy ecosystems stay on the host, and host‑side tools and sidecars mirror them into /gpu and /host, so adoption can happen without rewrites. It is a control‑plane boundary, not a workload plane.
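As a sketch of what “mirroring” means (the sidecar details here are invented for illustration, not the shipped ones): a host-side loop snapshots GPU state with stock tooling and writes it into the exported namespace as a plain file, so the VM only ever sees a file-shaped view.

    use std::{fs, process::Command, thread, time::Duration};

    fn main() -> std::io::Result<()> {
        loop {
            // Snapshot host GPU state with a stock tool; CUDA and the
            // driver stack stay entirely on the host side.
            let out = Command::new("nvidia-smi")
                .args(["--query-gpu=utilization.gpu,memory.used",
                       "--format=csv,noheader"])
                .output()?;
            // Mirror it into the control-plane namespace as a file.
            fs::write("/mnt/hive/gpu/stats", &out.stdout)?;
            thread::sleep(Duration::from_secs(5));
        }
    }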
In 0.4.0‑alpha I added authoritative scheduling/lease/export/policy control files with /proc observability, plus a REST gateway that projects the same file semantics over HTTP. QEMU aarch64/virt is the dev target today; UEFI ARM64 is the intended hardware target.
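The gateway adds no new semantics; it just projects the same reads and writes. Roughly (endpoint shapes are illustrative):

    GET  /proc/leases           ->  read of /proc/leases
    POST /queen/ctl             ->  write to /queen/ctl
    GET  /worker/7/telemetry    ->  read of /worker/7/telemetry

Same budgets, same errors: a request that blows its budget gets an explicit limit error over HTTP rather than an open-ended stall.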
If you want a general‑purpose OS, this is the wrong tool. I wanted something boring on purpose, small but deceptively powerful, and I was willing to sacrifice convenience to regain control.
Cohesix (OP)
[deleted]
Cohesix isn’t trying to be a better Linux, a lighter Kubernetes, or a new ML runtime. It’s intentionally not a workload OS.
The problem it’s aimed at is the authority boundary: where control, policy, leases, and revocation live once you already have large, fast-moving OSS stacks on the host. That’s why the VM is aggressively constrained (no_std, no POSIX, no daemons) and why everything reduces to file-shaped operations with explicit budgets and failure modes.
Most of the “use X instead” answers assume you want more power inside the boundary. This goes the other way: remove power there so the remaining behavior is auditable and explainable.
If that tradeoff doesn’t resonate, it’s probably the wrong tool—and that’s OK.