Kotodama proxy: NCA-002 vs DD-v1

What does the NCA geometric fingerprint + AttnRes depth infrastructure change about generation?
NCA-002 (300M NCA tokens, no AttnRes, Muon 0.02, 6B language tokens from FineWeb-Edu) — the NCA baseline. It preserves L14 geometry (SR q=63), but L27 is dead (SR q=26, entropy frozen at 5.7) and gradients are 6-12x L0-heavy.
DD-v1 (750M NCA tokens + co-trained AttnRes at layers [0, 3, 7, 12, 21, 25], Muon 0.02, 6B language tokens from FineWeb-Edu) — the full stack. It preserves L14 (SR q=66), keeps L27 healthy (SR q=46, entropy 2.4), has balanced gradients, and shows functional attention at all 28 layers.

Both runs reach similar loss (~2.90-2.94 nats). Sampling: temperature 0.7, no top-p, 1024 tokens max. NCA-002 has 1 sample per prompt; DD-v1 has 5.
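The shared sampling setup can be pinned down as a small config, so both models are compared under identical decoding. This is a sketch assuming a HuggingFace-style keyword convention; the dict keys and the `n_samples` helper are illustrative, not from the source:

```python
# Decoding config common to both models: temperature 0.7,
# no top-p truncation (top_p=1.0 disables nucleus sampling),
# and a 1024-token generation cap.
SAMPLING = {
    "do_sample": True,
    "temperature": 0.7,
    "top_p": 1.0,          # "no top-p" in the notes
    "max_new_tokens": 1024,
}

# Samples drawn per prompt differ by run:
# NCA-002 has 1 sample per prompt, DD-v1 has 5.
N_SAMPLES = {"NCA-002": 1, "DD-v1": 5}

def n_samples(run: str) -> int:
    """Number of generations per prompt for a given run."""
    return N_SAMPLES[run]

print(n_samples("DD-v1"))  # 5
```

Pinning temperature and disabling top-p keeps the comparison about the models, not the decoder.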