This document introduces Distributed Neural Architectures (DNAs), a novel approach to neural network design in both vision and language domains. Unlike traditional fixed-architecture models, DNAs allow tokens (or image patches) to follow dynamic, content-dependent paths through a collection of modules such as transformer blocks and MLPs. The authors demonstrate that these models are competitive with dense baselines while enabling learned trade-offs between compute and quality, as well as parameter sharing across paths. Furthermore, the connectivity and computation patterns that emerge in trained DNAs are human-interpretable, revealing specialization among paths and modules and allowing computational resources to be allocated dynamically according to the complexity of the input.
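To make the routing idea concrete, here is a minimal sketch of content-dependent paths through a shared module pool. It is not the authors' exact mechanism: the class names (`Router`, `DNASketch`), the hard top-1 routing, the module pool composition, and all dimensions are illustrative assumptions; a real system would likely use a differentiable or load-balanced router.

```python
import torch
import torch.nn as nn

class Router(nn.Module):
    """Scores each token against the available modules; the top-scoring
    module processes that token at the current hop. (Hypothetical design.)"""
    def __init__(self, dim, num_modules):
        super().__init__()
        self.proj = nn.Linear(dim, num_modules)

    def forward(self, x):
        # x: (batch, seq, dim) -> per-token routing logits (batch, seq, modules)
        return self.proj(x)

class DNASketch(nn.Module):
    """Toy distributed architecture: a shared pool of modules (MLP blocks
    and self-attention blocks) that tokens traverse along content-dependent
    paths chosen by a router at each hop."""
    def __init__(self, dim=64, num_heads=4, num_mlp=2, num_attn=2, hops=3):
        super().__init__()
        mlps = [nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(),
                              nn.Linear(4 * dim, dim))
                for _ in range(num_mlp)]
        attns = [nn.MultiheadAttention(dim, num_heads, batch_first=True)
                 for _ in range(num_attn)]
        self.pool = nn.ModuleList(mlps + attns)
        self.num_mlp = num_mlp
        # One router per hop: paths can differ from hop to hop.
        self.routers = nn.ModuleList(
            Router(dim, len(self.pool)) for _ in range(hops))
        self.norm = nn.LayerNorm(dim)

    def forward(self, x):
        for router in self.routers:
            choice = router(x).argmax(dim=-1)   # hard top-1 path per token
            out = torch.zeros_like(x)
            for m, module in enumerate(self.pool):
                mask = (choice == m)            # tokens routed to module m
                if not mask.any():
                    continue                    # module skipped: compute saved
                if m < self.num_mlp:            # MLPs act per token
                    out[mask] = module(x[mask])
                else:                           # attention acts on the sequence
                    attn_out, _ = module(x, x, x)
                    out[mask] = attn_out[mask]
            x = self.norm(x + out)              # residual update between hops
        return x

tokens = torch.randn(2, 16, 64)                 # (batch, seq, dim)
print(DNASketch()(tokens).shape)                # torch.Size([2, 16, 64])
```

Because routing is per token, easy inputs can concentrate on cheap modules while harder inputs recruit more expensive ones, which is the sense in which compute allocation becomes dynamic and, with a trainable router, learnable.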