April 5, 2026

RESEARCH

Research Reports

Working documents from the GRDprocess research program. These reports cover experimental protocols, empirical results, and tool development across two active research axes.

THE FRAME operates at the upstream layer of normative analysis: detecting where prescriptions lack executable structure, before evaluation or oversight is applied. Normative specifications — constitutions, guidelines, alignment documents — prescribe what an AI system should produce as judgment. They cannot, by design, contain the instruments for producing that judgment. This is not a drafting failure: it is a structural property of how normative documents function. When the standard itself is structurally underspecified, better measurement tools do not fix the problem — they measure against a broken baseline.

AATM (Autonomous Artificial Thinking Machine) investigates a complementary problem: whether autonomous multi-agent systems operating from independent homeostatic state variables can reveal structural properties of AI architectures that are invisible in their output traces. The dialectic prototype series documents a concrete instance, reasoning traces that converge while internal state variables diverge, which bears directly on the interpretability monitoring problem.
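
As a rough illustration of that monitoring signal, the sketch below flags the pattern of converging reasoning traces paired with diverging internal states, assuming each agent exposes both an embedding of its trace and its homeostatic state vector. The function names, thresholds, and cosine-similarity choice are assumptions for illustration, not AATM's actual instrumentation.

```python
# Minimal sketch of the signal described above: output traces converging
# while internal homeostatic states diverge. All names and thresholds are
# hypothetical, not AATM's instrumentation.
import numpy as np

def mean_pairwise_cosine(vectors: np.ndarray) -> float:
    """Average cosine similarity over all distinct pairs of row vectors."""
    normed = vectors / np.linalg.norm(vectors, axis=1, keepdims=True)
    sims = normed @ normed.T
    n = len(vectors)
    return (sims.sum() - n) / (n * (n - 1))  # drop the diagonal of 1s

def trace_state_mismatch(trace_embeddings: np.ndarray,
                         state_vectors: np.ndarray,
                         trace_threshold: float = 0.9,
                         state_threshold: float = 0.5) -> bool:
    """True when reasoning traces look alike but internal states do not."""
    traces_converge = mean_pairwise_cosine(trace_embeddings) > trace_threshold
    states_diverge = mean_pairwise_cosine(state_vectors) < state_threshold
    return traces_converge and states_diverge

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    base = rng.normal(size=64)
    # Toy data: four near-identical trace embeddings, four unrelated states.
    traces = np.stack([base + rng.normal(scale=0.01, size=64) for _ in range(4)])
    states = rng.normal(size=(4, 64))
    print(trace_state_mismatch(traces, states))  # expected: True
```

In practice the thresholds would need calibration per architecture; the toy data only demonstrates the shape of the signal.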

The interpretability problem that GRDprocess can act on is narrow and specific: detecting structural ambiguity in normative instructions before they reach a model or agent. Everything else is either a workflow engineering failure — instructions that were not properly specified or verified — or the deeper problem of implicit unstructured rules embedded in LLM alignment, which no current interpretability tool resolves. The interpretability axis at GRDprocess focuses on the first problem. The second remains an open research question.
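
To make "detecting structural ambiguity before it reaches a model" concrete, here is a deliberately simple heuristic sketch, not the GRDprocess detector: it flags prescriptive sentences that carry no operational marker, such as a threshold, a condition, or a verification step. All regex patterns and names are illustrative assumptions.

```python
# Heuristic pre-flight check for normative instructions: flag prescriptive
# sentences with no executable criterion attached. Illustrative sketch only,
# not the GRDprocess detector.
import re

PRESCRIPTIVE = re.compile(r"\b(should|must|shall|ought to)\b", re.IGNORECASE)
OPERATIONAL = re.compile(
    r"\b(if|when|unless|at least|at most|no more than|measured by|verified by)\b|\d",
    re.IGNORECASE,
)

def underspecified_clauses(document: str) -> list[str]:
    """Return prescriptive sentences with no operational marker."""
    sentences = re.split(r"(?<=[.!?])\s+", document)
    return [s for s in sentences
            if PRESCRIPTIVE.search(s) and not OPERATIONAL.search(s)]

if __name__ == "__main__":
    spec = ("The assistant should be helpful. "
            "Responses must stay under 500 words when the user asks for a summary.")
    for clause in underspecified_clauses(spec):
        print("UNDERSPECIFIED:", clause)
    # Flags only the first sentence: prescriptive, but no executable criterion.
```

A serious version would need parsing rather than keyword matching; the point of the sketch is only where the check sits in the pipeline: upstream of the model, on the instruction text itself.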
