# Training Audits
Training audits show exactly what a training method (SFT, RLHF, DPO, distillation) did to a model's cognitive coherence.
## Basic Audit
```python
from aime_loc import LOC

loc = LOC()

# Compare a base checkpoint against its trained counterpart.
audit = loc.training_audit(
    base="mistralai/Mistral-7B-v0.3",
    trained="mistralai/Mistral-7B-Instruct-v0.3",
    method="SFT",
)

print(f"Base TC: {audit.base_profile.tc_score:.2f}%")
print(f"Trained TC: {audit.trained_profile.tc_score:.2f}%")
print(f"Delta: {audit.comparison.overall_delta:+.2f}pp")
```
## Coherence Changes
The audit provides a summary of how training affected overall cognitive coherence. Detailed analysis is computed server-side.
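As a minimal sketch of how the reported delta relates to the two TC scores, assuming the overall delta is simply the trained score minus the base score in percentage points (the numeric values below are placeholders, not real audit output):

```python
# Placeholder TC scores in percent; real values come from
# audit.base_profile.tc_score and audit.trained_profile.tc_score.
base_tc = 61.54
trained_tc = 58.21

# Assumption: overall_delta is the trained TC minus the base TC,
# expressed in percentage points (pp).
overall_delta = trained_tc - base_tc
print(f"Delta: {overall_delta:+.2f}pp")  # → Delta: -3.33pp
```

A negative delta indicates the training run reduced the model's measured coherence relative to the base checkpoint.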
## Recommendations
The audit includes actionable recommendations.
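The shape of the recommendations is not shown here. As a hedged sketch, assuming the audit object exposes them as a list of strings (both the `recommendations` attribute and the stand-in object below are assumptions, not confirmed API):

```python
# Stand-in for the object returned by loc.training_audit();
# the `recommendations` attribute name is an assumption.
class AuditResult:
    recommendations = [
        "Placeholder recommendation (not real audit output)",
        "Another placeholder recommendation",
    ]

audit = AuditResult()
for i, rec in enumerate(audit.recommendations, 1):
    print(f"{i}. {rec}")
```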
## Export Report
## Use 78Q for Audits
Training audits default to 78Q for publication-quality results.