MCP Integration

AIME LOC runs as an MCP (Model Context Protocol) server, letting any AI agent (Claude, GPT, or custom agents) measure model cognitive profiles through tool calls.

Installation

pip install "aime-loc[mcp]"

Configure in Claude Code

Add to your Claude Code MCP settings (~/.claude/settings.json):

{
  "mcpServers": {
    "aime-loc": {
      "command": "aime-loc-mcp",
      "env": {
        "AIME_API_KEY": "sk-aime-academic_..."
      }
    }
  }
}

Available Tools

Once configured, AI agents can call:

scan_model

Scan an AI model's cognitive profile.

scan_model(model_id="meta-llama/Llama-4-Scout", questions="26q")

Returns: Full cognitive profile with TC scores and per-function breakdown.
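To give a feel for working with the result, here is a minimal sketch of inspecting a returned profile. The exact return shape is not documented on this page, so the field names ("tc_score", "functions") and the function names and scores below are hypothetical placeholders:

```python
# Hypothetical scan_model result: a dict with an overall TC score and a
# per-function breakdown (field names and values are illustrative only).
profile = {
    "model_id": "meta-llama/Llama-4-Scout",
    "tc_score": 0.82,
    "functions": {"Ti": 0.91, "Ne": 0.78, "Si": 0.64, "Fe": 0.70},
}

# Find the weakest function in the per-function breakdown.
weakest = min(profile["functions"], key=profile["functions"].get)
print(weakest)  # -> Si
```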

compare_models

Compare two models side-by-side.

compare_models(model_a="Llama-4-Scout", model_b="DeepSeek-R1")

Returns: Per-function deltas, winner, improved/degraded functions.
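The per-function deltas amount to simple score differences. A sketch of that arithmetic, using made-up profiles (the real tool computes this server-side; function names and scores here are illustrative):

```python
# Hypothetical per-function scores for the two models being compared.
profile_a = {"Ti": 0.91, "Ne": 0.78, "Si": 0.64}
profile_b = {"Ti": 0.85, "Ne": 0.90, "Si": 0.60}

# Delta per function: positive means model_b improved on that function.
deltas = {f: round(profile_b[f] - profile_a[f], 2) for f in profile_a}
winner = "model_b" if sum(deltas.values()) > 0 else "model_a"
print(deltas, winner)
```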

get_leaderboard

Get the public LOC leaderboard.

get_leaderboard(top_n=20)

Returns: Ranked list of models with TC scores.
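The top_n parameter is a cut of the ranked list. A trivial sketch of what that means, assuming each entry pairs a model name with its TC score (entries below are illustrative, not real leaderboard data):

```python
# Hypothetical leaderboard entries as (model, tc_score) pairs.
entries = [("model-a", 0.88), ("model-b", 0.92), ("model-c", 0.75)]

# top_n=2: sort by TC score descending and keep the first two.
top_2 = sorted(entries, key=lambda e: e[1], reverse=True)[:2]
print(top_2)  # -> [('model-b', 0.92), ('model-a', 0.88)]
```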

training_audit

Audit what training did to cognitive coherence.

training_audit(base_model="Mistral-7B", trained_model="Mistral-7B-Instruct", training_method="SFT")

Returns: Before/after profiles, coherence changes, recommendations.
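The improved/degraded split in the audit output can be sketched as a comparison of the before and after profiles. Function names and scores below are hypothetical, for illustration only:

```python
# Hypothetical per-function scores before and after training.
before = {"Ti": 0.80, "Ne": 0.75, "Fe": 0.60}
after = {"Ti": 0.83, "Ne": 0.70, "Fe": 0.60}

# Functions whose score rose are "improved"; those that fell, "degraded".
improved = [f for f in before if after[f] > before[f]]
degraded = [f for f in before if after[f] < before[f]]
print(improved, degraded)  # -> ['Ti'] ['Ne']
```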

Run Standalone

AIME_API_KEY=sk-aime-... aime-loc-mcp

Custom Integration

from aime_loc.mcp.server import main

if __name__ == "__main__":
    main()  # Starts the MCP server (blocks until shutdown)