May 14, 2026

THE FRAME — A method for structured reasoning.

Structural safety and alignment for AI require a prerequisite that no current approach provides: normative propositions must be fully explicit before they can be evaluated for coherence. THE FRAME is a proof of concept that this is executable at scale. It demonstrates that privileging structural rules over case accumulation produces auditable, value-agnostic reasoning, independent of domain and of prior value commitments.

The observation

The world is filled with sterile discussions in which everyone tries to impose their point of view and moral values within a biased and incoherent framework. Current technologies not only reinforce this situation but also contribute to cognitive decline by leaving the final word to the machine — especially considering that it is trained on a corpus reflecting human weaknesses and biases.

The question

Is it possible to solve this problem by formalizing a method that is agnostic, so as not to carry human weaknesses, and that would make it possible to produce a result whose value — in terms of coherence and transparency — would be superior to the inputs provided?

The proposal

To reverse this paradigm, it is necessary to reintroduce informal logic into the human–machine loop.

By externalizing normalization and analysis from the human into dedicated components, the method can be applied efficiently and robustly, in particular by reducing biases and inconsistencies. This makes it possible to audit and improve the method and to standardize it over the long term.

This means that the machine must not carry human biases and moral values. The machine must remain agnostic. The machine must not prescribe but instead measure inconsistencies and the coherence of reasoning.

Current monolithic AI models do not allow this approach to be systematized and rely solely on the user’s willingness and competence in prompt engineering.

GRDprocess develops a framework in which transparency and the coherence of reasoning are prioritized, using AI no longer as a prescriber of averages based on training data, but as an analyzer and a tool for measuring deviations from fundamental principles.

It is not a philosophy, not an ideology, not a set of values. It is a method.

It operates on three layers: logical (definitions, categories, contradictions), epistemic (facts vs interpretations vs assumptions), and normative (axioms treated as explicit, adjustable parameters).

The tools

The Normalizer decomposes any normative statement into its structural elements: who acts, what action, on whom, and why. It identifies what is explicit, implicit, or missing, and what the user is trying to achieve.
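The decomposition the Normalizer performs can be pictured as a simple record. The sketch below is illustrative only, assuming a minimal schema; the field names (agent, action, patient, justification) and the class name are invented for this example and are not the actual data model used by the tools at nextinsight.org.

```python
from dataclasses import dataclass, field
from typing import Optional

# Hypothetical sketch of a Normalizer output record. Field names are
# illustrative, not the actual schema used by THE FRAME.
@dataclass
class NormalizedPrescription:
    agent: Optional[str]            # who acts
    action: Optional[str]           # what action
    patient: Optional[str]          # on whom the action bears
    justification: Optional[str]    # why
    explicit: list = field(default_factory=list)  # elements stated in the text
    implicit: list = field(default_factory=list)  # elements inferred from context
    missing: list = field(default_factory=list)   # elements absent altogether

    def unresolved(self) -> list:
        """Names of the structural slots that remain unbound."""
        slots = {"agent": self.agent, "action": self.action,
                 "patient": self.patient, "justification": self.justification}
        return [name for name, value in slots.items() if value is None]

# Example: "Drivers must yield to pedestrians" leaves the justification unstated.
p = NormalizedPrescription(agent="drivers", action="yield",
                           patient="pedestrians", justification=None,
                           explicit=["agent", "action", "patient"],
                           missing=["justification"])
print(p.unresolved())  # ['justification']
```

Representing the decomposition as explicit slots makes "what is missing" a computable property rather than a matter of interpretation.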

The Analyzer classifies a normalized prescription against foundational structural principles. It detects which principles are activated, which are implicit, and where ambiguity and incoherence lie. It also makes it possible to build individual commitment stores for subsequent deviation analysis.
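An Analyzer pass of this kind can be sketched as matching a normalized prescription against a set of reference principles and appending the result to a commitment store. The two principle checks below are invented examples, assuming prescriptions arrive as plain dictionaries; they stand in for whatever reference principles the user supplies, and none of this is the actual implementation.

```python
# Hypothetical sketch of an Analyzer pass. The principles and their checks
# are invented examples standing in for user-supplied reference principles.
PRINCIPLES = {
    "non_maleficence": lambda p: p.get("action") not in {"harm", "deceive"},
    "explicit_justification": lambda p: p.get("justification") is not None,
}

def analyze(prescription: dict, store: list) -> dict:
    """Evaluate one prescription against every principle and record the
    result in the commitment store for later deviation analysis."""
    report = {name: check(prescription) for name, check in PRINCIPLES.items()}
    store.append({"prescription": prescription, "report": report})
    return report

store = []  # one commitment store per individual or document
r1 = analyze({"agent": "platform", "action": "deceive",
              "patient": "users", "justification": "growth"}, store)
print(r1)  # {'non_maleficence': False, 'explicit_justification': True}
```

Because the store keeps each prescription alongside its report, deviations across a person's or document's accumulated commitments can be measured rather than asserted.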

The Extractor processes existing documents — policy frameworks, constitutional documents, governance specifications — and extracts prescriptions for normalization and analysis without manual decomposition.

All three tools are operational and live at www.nextinsight.org.

This method can be applied to the following use cases:

Domain | Application | Value
AI Safety | Improve guardrails through dialectical verification | Detect incoherence in AI outputs
Policy Analysis | Articulate axioms in public debates | Reveal real points of disagreement
Conflict Resolution | Map apparent vs real disagreements | Enable productive dialogue
Education | Teach structured reasoning | Develop critical thinking
Institutional Decision-Making | Audit reasoning processes | Improve decision quality

Current results

The methodology has been applied to two public documents. The structural analysis of the Anthropic Claude Constitution (January 2026) identified that 46 of 47 prescriptions systematically prescribe judgment without providing instruments for that judgment — and that the single exception works precisely because it eliminates contextual judgment entirely. The structural analysis of the White House National Policy Framework for Artificial Intelligence (March 2026) identified that justification is explicit in 100% of prescriptions while execution mechanism is absent in 100% — a structural property of the legislative register, not a drafting failure.

Note on principle validation

The reference principles used by the Analyzer are structural inputs, not hardcoded values. Any principle can be introduced into the system provided it is fully normalizable — all elements explicitly bound, no unresolved variables. The validation module evaluates form only, with no access to existing principles or Analyzer outputs. This ensures that the reference framework remains an explicit, auditable, and modifiable parameter rather than an architectural assumption.
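A form-only validation of this kind can be sketched as follows. The slot names and the use of "?" as an unresolved-variable marker are assumptions made for this example, not the actual validation rules; the point is that the check inspects structure alone and never consults existing principles or Analyzer outputs.

```python
# Hypothetical sketch of a form-only normalizability check. Slot names and
# the "?" unresolved-variable marker are invented for this example.
REQUIRED_SLOTS = ("agent", "action", "patient", "justification")

def is_normalizable(principle: dict) -> bool:
    """Accept a candidate principle only if every structural slot is
    explicitly bound and no value contains an unresolved variable.
    Deliberately blind to content: no access to existing principles
    or to Analyzer outputs."""
    for slot in REQUIRED_SLOTS:
        value = principle.get(slot)
        if not value or "?" in str(value):
            return False
    return True

print(is_normalizable({"agent": "any actor", "action": "disclose assumptions",
                       "patient": "affected parties",
                       "justification": "auditability"}))   # True
print(is_normalizable({"agent": "?", "action": "decide",
                       "patient": "users",
                       "justification": "efficiency"}))     # False
```

Keeping the validator blind to content is what lets the reference framework remain a swappable parameter: any fully bound principle passes, regardless of what it prescribes.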

In the current implementation, this separation is an architectural intention. Its deterministic enforcement will be addressed through the external dialectical agent component responsible for locking structural rules independently of the LLM layer.