# LOC Client

## Synchronous Client

### `LOC`

Synchronous AIME LOC client.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `api_key` | `str \| None` | Your AIME API key (`sk-aime-{tier}_...`). Falls back to the `AIME_API_KEY` environment variable. | `None` |
| `base_url` | `str \| None` | Override the API base URL (for testing / self-hosted deployments). | `None` |
| `timeout` | `Timeout \| None` | Custom `httpx.Timeout`. | `None` |
| `max_retries` | `int` | Number of automatic retries on transient failures. | `3` |
| `poll_interval` | `float` | Seconds between scan status polls. | `2.0` |
| `poll_timeout` | `float` | Maximum seconds to wait for a scan to complete. | `600.0` |

Example:

```python
with LOC(api_key="sk-aime-academic_...") as loc:
    profile = loc.scan("meta-llama/Llama-4-Scout")
    profile.radar_chart()
```

`__init__(api_key=None, base_url=None, timeout=None, max_retries=3, poll_interval=2.0, poll_timeout=600.0)`
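The documented `api_key` fallback (explicit argument first, then the `AIME_API_KEY` environment variable) can be sketched as a small helper; `resolve_api_key` is a hypothetical name used here for illustration, not part of the client:

```python
import os

def resolve_api_key(explicit=None):
    # Documented fallback order: explicit api_key argument, then AIME_API_KEY.
    key = explicit or os.environ.get("AIME_API_KEY")
    if key is None:
        raise RuntimeError(
            "No API key found: pass api_key=... or set AIME_API_KEY"
        )
    return key
```

The client itself presumably performs an equivalent lookup in `__init__`, raising early when neither source provides a key.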

#### `scan(model, *, questions='26q', functions=None, cache=True)`

Scan an AI model's cognitive profile.

Submits a scan job to the AIME Cloud, polls until completion, and returns a fully populated `CognitiveProfile`.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `model` | `str` | HuggingFace model ID (e.g., `"meta-llama/Llama-4-Scout"`). | *required* |
| `questions` | `str` | Question set: `"26q"` (quick) or `"78q"` (full). | `'26q'` |
| `functions` | `list[str] \| None` | Optional subset of cognitive functions to evaluate. | `None` |
| `cache` | `bool` | Whether to return cached results if available. | `True` |

Returns:

| Type | Description |
| --- | --- |
| `CognitiveProfile` | A `CognitiveProfile` with 13-function TC scores, gate diagnostics, and visualization/export methods. |

Example:

```python
profile = loc.scan("meta-llama/Llama-4-Scout")
print(profile.tc_score)   # 14.2
profile.radar_chart()
```
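The submit-then-poll behavior described above, governed by the constructor's `poll_interval` and `poll_timeout`, can be sketched generically; `poll_until_complete` and `get_status` are illustrative stand-ins, not names from the library:

```python
import time

def poll_until_complete(get_status, poll_interval=2.0, poll_timeout=600.0):
    # Poll every `poll_interval` seconds until the job reports "complete",
    # giving up after `poll_timeout` seconds (the documented defaults).
    deadline = time.monotonic() + poll_timeout
    while True:
        status = get_status()
        if status == "complete":
            return status
        if time.monotonic() + poll_interval > deadline:
            raise TimeoutError("scan did not complete within poll_timeout")
        time.sleep(poll_interval)
```

With the defaults, a scan is checked every 2 seconds and abandoned after 10 minutes; tightening `poll_interval` trades API chatter for faster turnaround.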

#### `compare(model_a, model_b, *, questions='26q')`

Compare the cognitive profiles of two models.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `model_a` | `str` | First model's HuggingFace ID. | *required* |
| `model_b` | `str` | Second model's HuggingFace ID. | *required* |
| `questions` | `str` | Question set: `"26q"` or `"78q"`. | `'26q'` |

Returns:

| Type | Description |
| --- | --- |
| `ModelComparison` | A `ModelComparison` with per-function deltas, the winner, and visualization methods. |

Example:

```python
comp = loc.compare("Llama-4-Scout", "Llama-3.3-70B")
print(comp.summary())
comp.delta_chart()
```
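To make "per-function deltas" and "winner" concrete, here is a minimal sketch assuming per-function scores are plain dicts keyed by function name; `per_function_deltas` and `winner` are hypothetical helpers, and the mean-score rule below is an assumption for illustration, not necessarily how `ModelComparison` decides:

```python
def per_function_deltas(scores_a, scores_b):
    # Positive delta means model B scored higher on that cognitive function.
    return {fn: round(scores_b[fn] - scores_a[fn], 4) for fn in scores_a}

def winner(scores_a, scores_b):
    # Illustrative aggregate: the model with the higher mean score wins.
    mean_a = sum(scores_a.values()) / len(scores_a)
    mean_b = sum(scores_b.values()) / len(scores_b)
    return "model_a" if mean_a >= mean_b else "model_b"
```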

#### `training_audit(base, trained, *, method='unknown', questions='78q')`

Audit what training did to a model's cognitive coherence.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `base` | `str` | HuggingFace ID of the base/pretrained model. | *required* |
| `trained` | `str` | HuggingFace ID of the fine-tuned variant. | *required* |
| `method` | `str` | Training method (`"SFT"`, `"RLHF"`, `"DPO"`, etc.). | `'unknown'` |
| `questions` | `str` | Question set: `"26q"` or `"78q"`. | `'78q'` |

Returns:

| Type | Description |
| --- | --- |
| `TrainingAudit` | A `TrainingAudit` with before/after profiles, gate changes, and actionable recommendations. |

Example:

```python
audit = loc.training_audit(
    base="mistralai/Mistral-7B-v0.3",
    trained="mistralai/Mistral-7B-Instruct-v0.3",
    method="SFT",
)
audit.save_report("sft_audit.md")
```
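The "gate changes" in the returned audit can be pictured as a before/after diff over gate states. A sketch, assuming gates are dicts of pass/fail booleans; `gate_changes` is an illustrative helper, not a library function:

```python
def gate_changes(base_gates, trained_gates):
    # Report gates whose pass/fail state flipped between base and trained.
    changes = {}
    for gate, before in base_gates.items():
        after = trained_gates.get(gate, before)
        if after != before:
            changes[gate] = {"before": before, "after": after}
    return changes
```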

#### `benchmark(models, *, questions='26q')`

Benchmark multiple models and generate a leaderboard.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `models` | `list[str]` | List of HuggingFace model IDs to benchmark. | *required* |
| `questions` | `str` | Question set: `"26q"` or `"78q"`. | `'26q'` |

Returns:

| Type | Description |
| --- | --- |
| `BenchmarkResult` | A `BenchmarkResult` with ranked models and a heatmap. |

Example:

```python
results = loc.benchmark([
    "meta-llama/Llama-4-Scout",
    "deepseek-ai/DeepSeek-R1-Distill-Qwen-32B",
])
results.leaderboard_table()
```
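The ranking behind a leaderboard table reduces to sorting models by score. A sketch, assuming overall TC scores in a plain dict; `rank_models` is a hypothetical helper for illustration:

```python
def rank_models(results):
    # results: mapping of model ID -> overall TC score; highest ranks first.
    ordered = sorted(results.items(), key=lambda kv: kv[1], reverse=True)
    return [
        {"rank": i + 1, "model": model, "tc_score": score}
        for i, (model, score) in enumerate(ordered)
    ]
```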

#### `leaderboard(*, top_n=20)`

Get the public LOC leaderboard.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `top_n` | `int` | Number of top models to return. | `20` |

Returns:

| Type | Description |
| --- | --- |
| `Leaderboard` | A `Leaderboard` with ranked entries. |

#### `models()`

List all available/cached model IDs.

#### `usage()`

Check API usage for the current billing period.

Returns:

| Type | Description |
| --- | --- |
| `dict[str, Any]` | Dictionary with `scans_used`, `scans_limit`, `tier`, `period_end`, etc. |
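A common use of this dict is computing remaining quota from the documented `scans_used` and `scans_limit` keys; `scans_remaining` is an illustrative helper, not part of the client:

```python
def scans_remaining(usage):
    # Derive remaining quota for the billing period; clamp at zero in case
    # the account has overrun its limit.
    return max(usage["scans_limit"] - usage["scans_used"], 0)
```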

#### `close()`

Close the HTTP connection pool.

## Asynchronous Client

### `AsyncLOC`

Asynchronous AIME LOC client.

Identical API to `LOC`, but all methods are coroutines.

Example:

```python
async with AsyncLOC(api_key="sk-aime-academic_...") as loc:
    profile = await loc.scan("meta-llama/Llama-4-Scout")
    profile.radar_chart()
```
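The main payoff of the async client is scanning several models concurrently. A sketch of the fan-out pattern with `asyncio.gather`; `scan_many` is a hypothetical wrapper, and `scan` stands in for a coroutine such as `AsyncLOC.scan`:

```python
import asyncio

async def scan_many(scan, models):
    # Launch one scan coroutine per model and await them all together;
    # results come back in the same order as `models`.
    return await asyncio.gather(*(scan(m) for m in models))
```

With the real client this would be `await scan_many(loc.scan, [...])` inside the `async with` block, so all scans share one connection pool.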

`__init__(api_key=None, base_url=None, timeout=None, max_retries=3, poll_interval=2.0, poll_timeout=600.0)`

#### `scan(model, *, questions='26q', functions=None, cache=True)` *(async)*

Scan an AI model's cognitive profile (async).

#### `compare(model_a, model_b, *, questions='26q')` *(async)*

Compare the cognitive profiles of two models (async).

#### `training_audit(base, trained, *, method='unknown', questions='78q')` *(async)*

Audit what training did to cognitive coherence (async).

#### `benchmark(models, *, questions='26q')` *(async)*

Benchmark multiple models and generate a leaderboard (async).

#### `leaderboard(*, top_n=20)` *(async)*

Get the public LOC leaderboard (async).

#### `models()` *(async)*

List all available/cached model IDs (async).

#### `usage()` *(async)*

Check API usage (async).

#### `close()` *(async)*

Close the HTTP connection pool.