Data Models¶
Common Types¶
CognitiveFunction¶
Bases: str, Enum
The 13 LOC cognitive functions.
8 base bands:
Thinking, Cognition, Emotion, Attention, Sensation, Feelings, Intuition, Energy
5 compound functions:
Reasoning, Understanding, Awareness, Mindfulness, Consciousness
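Because the enum derives from both str and Enum, its members behave like plain strings in comparisons and serialization. A minimal sketch of that pattern (only two of the 13 members shown; the actual member names and values in the SDK may differ):

```python
from enum import Enum

# Sketch of the str + Enum mixin pattern; member names/values are
# assumptions based on the band list above, not the SDK source.
class CognitiveFunction(str, Enum):
    THINKING = "Thinking"      # one of the 8 base bands
    REASONING = "Reasoning"    # one of the 5 compound functions

# str-derived members compare equal to plain strings,
# which makes them safe to drop into JSON payloads directly.
print(CognitiveFunction.THINKING == "Thinking")
```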
QuestionSet¶
Bases: str, Enum
Available question set sizes for LOC evaluation.
ScanStatus¶
Bases: str, Enum
Scan job status.
Comparison Models¶
FunctionDelta¶
Bases: BaseModel
Change in one cognitive function between two models or conditions.
Attributes:

| Name | Type | Description |
|---|---|---|
| function | CognitiveFunction | Which cognitive function changed. |
| tc_a | float | TC score of model A (or base model). |
| tc_b | float | TC score of model B (or trained model). |
| delta | float | tc_b - tc_a (positive = improvement). |
| improved | bool | Whether the function improved (delta > 0). |
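The last two fields are derived from the first two. The relationship, shown with illustrative TC scores (not real benchmark numbers):

```python
# Illustrative TC scores for one cognitive function.
tc_a = 61.4   # model A / base model
tc_b = 63.8   # model B / trained model

delta = tc_b - tc_a        # positive = improvement
improved = delta > 0

print(f"delta = {delta:+.1f}pp, improved = {improved}")
```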
ModelComparison¶
Bases: BaseModel
Side-by-side comparison of two cognitive profiles.
Example

```python
>>> comp = loc.compare("Llama-4-Scout", "Llama-3.3-70B")
>>> comp.summary()
'Llama-4-Scout wins by +1.2pp overall (improved 8/13 functions)'
>>> comp.delta_chart()
```
summary()¶
One-line comparison summary.
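The summary string shown in the class example can be derived from the per-function deltas. A self-contained sketch of that formatting, with illustrative values rather than the SDK's actual implementation:

```python
# Per-function deltas in percentage points (illustrative values only;
# a real comparison would have one entry per cognitive function).
deltas = {"Thinking": 2.1, "Emotion": -0.8, "Attention": 0.5, "Energy": 1.0}

improved = sum(1 for d in deltas.values() if d > 0)
overall = sum(deltas.values()) / len(deltas)

print(f"Model A wins by {overall:+.1f}pp overall "
      f"(improved {improved}/{len(deltas)} functions)")
```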
delta_chart(show=True, save=None, **kwargs)¶
Display per-function delta bar chart.
side_by_side_radar(show=True, save=None, **kwargs)¶
Display overlaid radar charts for both models.
save_report(path, fmt='md')¶
Save comparison report to file.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| path | str | Output file path. | required |
| fmt | str | Format: "md" (markdown) or "json". | 'md' |
to_dict()¶
Export as plain dictionary.
TrainingAudit¶
Bases: BaseModel
Before/after analysis of what training did to cognitive coherence.
This is the key product for training teams: it shows exactly which cognitive functions training improved or degraded.
Example

```python
>>> audit = loc.training_audit(
...     base="Mistral-7B-v0.3",
...     trained="Mistral-7B-Instruct-v0.3",
...     method="SFT"
... )
>>> audit.save_report("sft_audit.pdf")
```
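Conceptually, the audit partitions per-function deltas into improvements and regressions. A minimal self-contained sketch of that step (illustrative scores, not SDK code):

```python
# (function, base TC, trained TC) -- illustrative values only.
rows = [
    ("Thinking",  60.0, 64.5),
    ("Emotion",   55.2, 51.0),
    ("Attention", 58.1, 58.1),
]

improved = [name for name, base, trained in rows if trained > base]
degraded = [name for name, base, trained in rows if trained < base]

print("improved:", improved)   # functions training helped
print("degraded:", degraded)   # functions training hurt
```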
Benchmark Models¶
LeaderboardEntry¶
Bases: BaseModel
Single entry in the LOC leaderboard.
Attributes:

| Name | Type | Description |
|---|---|---|
| rank | int | Position in the leaderboard (1 = best). |
| model_id | str | HuggingFace model ID. |
| model_size | str | Parameter count string. |
| architecture | str | Model architecture type. |
| tc_score | float | Overall True Coherence %. |
| best_function | str | Strongest cognitive function. |
| worst_function | str | Weakest cognitive function. |
| bottleneck | str | Primary coherence gate bottleneck. |
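Rank follows directly from tc_score: higher True Coherence ranks first. A sketch of that ordering using only the fields needed (field names taken from the table above; the data is illustrative):

```python
# Minimal leaderboard entries (illustrative model IDs and scores).
entries = [
    {"model_id": "model-a", "tc_score": 61.2},
    {"model_id": "model-b", "tc_score": 64.8},
    {"model_id": "model-c", "tc_score": 58.9},
]

# Sort by overall TC score, descending, and assign rank 1 = best.
for rank, entry in enumerate(
        sorted(entries, key=lambda e: e["tc_score"], reverse=True), start=1):
    entry["rank"] = rank

print(sorted((e["rank"], e["model_id"]) for e in entries))
```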
Leaderboard¶
BenchmarkResult¶
Bases: BaseModel
Result of benchmarking multiple models.
Example

```python
>>> results = loc.benchmark(["Llama-4-Scout", "DeepSeek-R1"])
>>> results.leaderboard_table()
>>> results.heatmap()
```