Training Audits

A training audit compares a base model against its trained counterpart to show what a training method (SFT, RLHF, DPO, distillation) did to the model's cognitive coherence.

Basic Audit

from aime_loc import LOC

loc = LOC()
audit = loc.training_audit(
    base="mistralai/Mistral-7B-v0.3",
    trained="mistralai/Mistral-7B-Instruct-v0.3",
    method="SFT",
)

print(f"Base TC:    {audit.base_profile.tc_score:.2f}%")
print(f"Trained TC: {audit.trained_profile.tc_score:.2f}%")
print(f"Delta:      {audit.comparison.overall_delta:+.2f}pp")

Coherence Changes

The audit provides a summary of how training affected overall cognitive coherence. Detailed analysis is computed server-side.

print(f"Overall TC delta: {audit.comparison.overall_delta:+.2f}pp")

Recommendations

The audit includes actionable recommendations:

for rec in audit.recommendations:
    print(f"  • {rec}")

Export Report

audit.save_report("sft_audit.md")          # Markdown
audit.save_report("sft_audit.json", fmt="json")  # JSON

Use 78Q for Audits

Training audits default to 78Q for publication-quality results:

# Default: 78Q (recommended for audits)
audit = loc.training_audit(base="...", trained="...", method="DPO")

# Quick check: 26Q
audit = loc.training_audit(base="...", trained="...", method="DPO", questions="26q")