The Macs you already own, as one supercomputer.

Distributed MLX inference and training across heterogeneous Apple Silicon — a model too big for one machine, on hardware that never leaves your building.

Research-stage. Phase 1 works today: daemon, CLI, peer discovery, CRDT state, and job orchestration. Phase 2, a credits marketplace, is on the roadmap and is modeled at $0 in any commercial planning.
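The CRDT state mentioned above is the property that lets peers gossip mesh state in any order and still converge. mlx-mesh's actual state types aren't shown here; as a hedged illustration of the general idea only, here is a minimal grow-only counter (G-Counter), the textbook starter CRDT. Class and peer names are hypothetical, not mlx-mesh APIs.

```python
# Illustrative sketch of a CRDT, the kind of structure that could back
# shared mesh state. Not mlx-mesh code; names are hypothetical.

class GCounter:
    """Grow-only counter: each peer increments only its own slot,
    and merge takes the per-slot maximum. Merge is commutative,
    associative, and idempotent, so replicas converge regardless
    of gossip order or duplication."""

    def __init__(self) -> None:
        self.slots: dict[str, int] = {}

    def increment(self, peer: str, n: int = 1) -> None:
        # A peer only ever writes its own slot.
        self.slots[peer] = self.slots.get(peer, 0) + n

    def merge(self, other: "GCounter") -> None:
        # Per-slot max: applying the same update twice is harmless.
        for peer, count in other.slots.items():
            self.slots[peer] = max(self.slots.get(peer, 0), count)

    def value(self) -> int:
        return sum(self.slots.values())


# Two replicas diverge, then sync in both directions and agree.
a, b = GCounter(), GCounter()
a.increment("studio", 3)
b.increment("mini", 2)
a.merge(b)
b.merge(a)
assert a.value() == b.value() == 5
```

The payoff for a LAN mesh is that no peer is the source of truth: any pair can exchange state whenever they happen to see each other, and repeated or reordered merges never corrupt the result.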

What it does

mlx-mesh is R&D. The hard foundations work — peer discovery, CRDT state, memory-weighted scheduling, job orchestration — but significant engineering remains: cross-device latency, consistent numerics across heterogeneous chips, marketplace economics. Production use is at your own risk; commercial planning treats it as upside, not revenue.
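Memory-weighted scheduling, named above, can be pictured as splitting a model's layers across peers in proportion to each machine's free memory. The sketch below is a hedged illustration of that idea under assumed inputs; the function, peer names, and memory figures are hypothetical, not mlx-mesh's actual scheduler.

```python
# Hypothetical sketch of memory-weighted scheduling: divide a model's
# layers across peers proportional to free memory. Not mlx-mesh code.

def assign_layers(peers: dict[str, int], n_layers: int) -> dict[str, int]:
    """Split n_layers across peers proportional to free memory (GB).

    Each peer gets the floor of its proportional share; any leftover
    layers go to the peers with the most free memory.
    """
    total = sum(peers.values())
    alloc = {p: (mem * n_layers) // total for p, mem in peers.items()}
    leftover = n_layers - sum(alloc.values())
    for p in sorted(peers, key=peers.get, reverse=True)[:leftover]:
        alloc[p] += 1
    return alloc


# Illustrative heterogeneous mesh: a Mac Studio, a mini, and an Air.
peers = {"studio": 192, "mini": 64, "air": 24}
print(assign_layers(peers, 32))  # {'studio': 22, 'mini': 8, 'air': 2}
```

The point of weighting by memory rather than splitting evenly is exactly the heterogeneity called out above: a 192 GB Studio should host far more of the model than a 24 GB Air, or the smallest machine becomes the bottleneck.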

mlx-mesh sits alongside mlx-go as the distributed-compute arm of the Apple Silicon work. When a model doesn't fit on one Mac — or when a regulated team needs training workloads to stay inside the building — mlx-mesh turns idle devices on the LAN into coordinated capacity. cove is the companion project for sandboxing those workloads when isolation is required.