The Macs you already own, as one supercomputer.
Distributed MLX inference and training across heterogeneous Apple Silicon — a model too big for one machine, on hardware that never leaves your building.
Research. Phase 1 working (daemon, CLI, peer discovery, CRDT state, job orchestration). Phase 2 (credits marketplace) is on the roadmap. Modeled at $0 in any commercial planning.
What it does
- Memory-weighted scheduling — a 128GB Mac and a 48GB Mac pool their capacity automatically. Pipeline stages are split in proportion to each node's unified memory.
- Heterogeneous compute — Metal, CUDA, and CPU nodes coexist in the same mesh. Same coordination layer, different backends.
- Zero-config LAN discovery — libp2p with mDNS, QUIC, WebRTC. Plug in a Mac; the mesh finds it.
- CRDT state — nodes converge automatically. Works offline. No bootstrap server required.
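Memory-weighted scheduling can be sketched as proportional allocation with largest-remainder rounding: each node gets a share of the model's pipeline stages matching its share of the pool's unified memory. This is an illustrative sketch, not mlx-mesh's actual scheduler; the function and field names are hypothetical.

```python
# Hypothetical sketch of memory-weighted pipeline scheduling: split a
# model's layers across nodes in proportion to each node's unified memory.
# Names are illustrative, not mlx-mesh's API.

def split_layers(num_layers: int, node_memory_gb: list[int]) -> list[int]:
    """Assign a layer count to each node, proportional to its memory.

    Largest-remainder rounding guarantees the counts sum to num_layers.
    """
    total = sum(node_memory_gb)
    exact = [num_layers * m / total for m in node_memory_gb]
    counts = [int(x) for x in exact]
    # Hand leftover layers to the nodes with the largest fractional remainders.
    leftover = num_layers - sum(counts)
    order = sorted(range(len(exact)), key=lambda i: exact[i] - counts[i], reverse=True)
    for i in order[:leftover]:
        counts[i] += 1
    return counts

# A 128GB Mac and a 48GB Mac splitting a 32-layer model:
print(split_layers(32, [128, 48]))  # → [23, 9]
```

Proportional splitting keeps the largest node from becoming the bottleneck: each node holds roughly as many layers as its memory can serve.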
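Why CRDT state converges without a bootstrap server: merges are commutative, associative, and idempotent, so replicas can mutate while offline and sync in any order. A minimal sketch with a grow-only counter (G-Counter); mlx-mesh's actual CRDT types are not shown here, and this class is purely illustrative.

```python
# Minimal CRDT sketch: a grow-only counter (G-Counter). Each replica tracks
# a per-node count; merge takes the per-node max, so state converges no
# matter when or how often replicas exchange updates.

class GCounter:
    def __init__(self, node_id: str):
        self.node_id = node_id
        self.counts: dict[str, int] = {}

    def increment(self, n: int = 1) -> None:
        self.counts[self.node_id] = self.counts.get(self.node_id, 0) + n

    def merge(self, other: "GCounter") -> None:
        # Per-node max is commutative, associative, and idempotent —
        # the three properties that make offline-first convergence work.
        for k, v in other.counts.items():
            self.counts[k] = max(self.counts.get(k, 0), v)

    def value(self) -> int:
        return sum(self.counts.values())

# Two nodes mutate while partitioned, then sync in either order.
a, b = GCounter("mac-a"), GCounter("mac-b")
a.increment(3)
b.increment(5)
a.merge(b)
b.merge(a)
assert a.value() == b.value() == 8
```

The same merge discipline applies to richer state (node membership, job status): as long as every field merges with an order-independent rule, no coordinator is needed.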
mlx-mesh is R&D. The hard foundations work — peer discovery, CRDT state, memory-weighted scheduling, job orchestration — but significant engineering remains: cross-device latency, heterogeneous compute consistency, marketplace economics. Production use is at your own risk; commercial planning treats it as upside, not revenue.
mlx-mesh sits alongside mlx-go as the distributed-compute arm of the Apple Silicon work. When a model doesn't fit on one Mac — or when a regulated team wants training workloads to stay inside the building — mlx-mesh turns idle devices on the LAN into coordinated capacity. cove is the isolation companion when those training workloads need sandboxing.
source: private repo, available for review on request — tmc@tmc.dev
docs: in progress
contact: tmc@tmc.dev