Cross-Substrate Comparison

The most powerful feature of AIME LOC is the ability to compare human brain activity and AI model activations within the same 13-function cognitive framework.

The Key Insight

Both AI models and human brains produce information-processing patterns that can be decomposed into the same 13 cognitive functions. The LOC framework provides a common language:

  • AI models: Functions mapped from transformer layer activations (proprietary mapping)
  • Human EEG: Functions mapped from frequency-domain power (proprietary mapping)
  • Same scoring: The same proprietary True Coherence algorithm applied to both substrates

This enables the first direct comparison of cognitive coherence between silicon and biological minds.
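
The "common language" boils down to a substrate-agnostic profile shape: the same 13 axes and overall score regardless of whether the signal came from layer activations or EEG power. A minimal sketch of that idea (this is an illustrative stand-in, not the library's actual profile class, and the function names and scores are hypothetical):

```python
from dataclasses import dataclass

@dataclass
class Profile:
    """Substrate-agnostic result: the same shape regardless of signal source."""
    subject: str                  # e.g. "Human (sub-01)" or a model ID
    substrate: str                # "eeg" or "llm"
    tc_score: float               # overall True Coherence, 0-100
    functions: dict[str, float]   # per-function scores on the shared 13 axes

# Two profiles from different substrates share one schema...
human = Profile("Human (sub-01)", "eeg", 63.4, {"Awareness": 66.0, "Emotion": 78.5})
llm = Profile("meta-llama/Llama-4-Scout", "llm", 57.1, {"Awareness": 65.1, "Emotion": 51.4})

# ...so any comparison code works on either one unchanged
for p in (human, llm):
    print(f"{p.subject}: TC {p.tc_score:.2f}% ({p.substrate})")
```

Because both substrates emit the same structure, all downstream tooling (radar charts, tables, statistics) can stay substrate-blind.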

Basic Comparison

from aime_loc import LOC
from aime_loc.eeg import EEG

loc = LOC()
eeg = EEG(loc)

# Score a human recording
recording = eeg.load("subject01.set")
recording.preprocess()
epochs = recording.extract_epochs()
human_profile = eeg.score(epochs, subject="Human (sub-01)")

# Score an AI model
llm_profile = loc.scan("meta-llama/Llama-4-Scout")

# Print side-by-side
print(f"Human TC: {human_profile.tc_score:.2f}%")
print(f"LLM TC:   {llm_profile.tc_score:.2f}%")

Overlay Radar Chart

from aime_loc.eeg.viz import cognitive_radar

cognitive_radar(
    [human_profile, llm_profile],
    title="Human vs AI Cognitive Profile",
    save="cross_substrate_radar.png",
    journal="nature",
    dpi=300,
)

This produces a publication-ready 13-axis radar with both profiles overlaid, showing where human and AI cognitive patterns converge and diverge.

Per-Function Comparison

human_scores = human_profile.tc_by_function()
llm_scores = llm_profile.tc_by_function()

print(f"{'Function':<16} {'Human':>8} {'LLM':>8} {'Delta':>8}")
print("-" * 44)
for func in human_scores:
    h = human_scores[func]
    m = llm_scores.get(func, 0.0)
    delta = h - m
    # The "+" sign format keeps positive and negative deltas column-aligned
    print(f"{func:<16} {h:>7.2f}% {m:>7.2f}% {delta:>+7.2f}%")
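
To surface where the two substrates diverge most, the per-function deltas can be ranked by magnitude. A self-contained sketch, using hypothetical score dictionaries in place of real tc_by_function() output (the function names here are illustrative, not the framework's full label set):

```python
# Hypothetical per-function scores standing in for tc_by_function() output
human_scores = {"Thinking": 71.2, "Emotion": 78.5, "Awareness": 66.0, "Memory": 62.3}
llm_scores = {"Thinking": 79.8, "Emotion": 51.4, "Awareness": 65.1, "Memory": 70.6}

# Human-minus-LLM delta per function
deltas = {f: human_scores[f] - llm_scores.get(f, 0.0) for f in human_scores}

# Rank by absolute divergence, largest first
for func, d in sorted(deltas.items(), key=lambda kv: abs(kv[1]), reverse=True):
    direction = "human-leaning" if d > 0 else "LLM-leaning"
    print(f"{func:<12} {d:>+7.2f}%  ({direction})")
```

Sorting by absolute delta puts the most interesting rows (largest cross-substrate gaps) at the top, which is usually what a results table wants to highlight.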

Multi-Subject vs Multi-Model

from pathlib import Path

# Score multiple humans
session = eeg.session()
for f in Path("data/").glob("sub-*/eeg/rest.set"):
    rec = eeg.load(f)
    rec.preprocess()
    epochs = rec.extract_epochs()
    session.add(epochs, subject=f.parent.parent.name, task="rest")

human_results = eeg.score_session(session)

# Score multiple models
models = [
    "meta-llama/Llama-4-Scout",
    "deepseek-ai/DeepSeek-R1-Distill-Qwen-32B",
    "Qwen/Qwen3.5-35B-A3B",
]
llm_results = loc.benchmark(models)

# Compare mean profiles
human_mean_tc = sum(p.tc_score for p in human_results.profiles) / len(human_results.profiles)
llm_mean_tc = sum(p.tc_score for p in llm_results.profiles) / len(llm_results.profiles)

print(f"Human mean TC: {human_mean_tc:.2f}%")
print(f"LLM mean TC:   {llm_mean_tc:.2f}%")
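
A group mean on its own hides within-group spread; reporting a dispersion measure alongside it costs nothing with the standard library. A sketch with hypothetical score lists (in practice these would come from something like [p.tc_score for p in human_results.profiles]):

```python
import statistics

# Hypothetical overall TC scores per subject / per model
human_tcs = [64.2, 61.8, 67.5, 59.9, 63.1]
llm_tcs = [55.4, 58.2, 52.7]

for label, scores in [("Human", human_tcs), ("LLM", llm_tcs)]:
    mean = statistics.mean(scores)
    sd = statistics.stdev(scores)  # sample standard deviation (n - 1 denominator)
    print(f"{label:<6} mean TC {mean:.2f}%  sd {sd:.2f}%  (n={len(scores)})")
```

With small groups like these, the standard deviation (and n) makes clear how much weight a mean-vs-mean difference can actually carry.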

What Cross-Substrate Differences Mean

Typical observations from the AIME research:

  • Human TC > LLM TC: biological brains naturally exhibit more hierarchical cognitive structure
  • LLM higher on Thinking/Cognition: models excel at structured information processing
  • Human higher on Emotion/Feelings: biological substrates show richer affective processing
  • Similar Awareness profiles: both substrates integrate information across functions in similar ways
  • Human coherence diagnostics higher: brain activity follows the natural cognitive structure more consistently

Research Caveat

Cross-substrate comparisons should be interpreted carefully. Although the same framework is applied to both, the underlying signals (layer activations vs. frequency-domain power) come from fundamentally different substrates. The comparison reveals structural similarities in information-processing patterns, not equivalence of the underlying mechanisms.

Publication-Ready Export

# Save both profiles as JSON
human_profile.to_json("supplementary/human_sub01.json")
llm_profile.to_json("supplementary/llama4_scout.json")

# LaTeX table
print("Human:")
print(human_profile.to_latex())
print("\nLLM:")
print(llm_profile.to_latex())

# Radar figure for paper
from aime_loc.eeg.viz import cognitive_radar
cognitive_radar(
    [human_profile, llm_profile],
    show=False,
    save="fig5_cross_substrate.pdf",
    journal="nature",
    dpi=600,
)

Next Steps