EEG Research Study Example

A complete multi-subject, multi-task EEG study using AIME LOC with publication-ready outputs.

Study Design

  • Dataset: COG-BCI (Zenodo) — 17 subjects, 5 cognitive tasks
  • Tasks: N-back, MATB, PVT, Flanker, Resting state
  • Goal: Measure TC differences across cognitive tasks

Directory Structure

study/
├── data/
│   ├── sub-01/eeg/
│   │   ├── nback.set
│   │   ├── matb.set
│   │   ├── pvt.set
│   │   ├── flanker.set
│   │   └── rest.set
│   ├── sub-02/eeg/
│   │   └── ...
│   └── ...
├── figures/
├── results/
└── analyze.py
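The traversal the analysis script performs over this layout can be sketched on its own with `pathlib`; the snippet below builds a throwaway copy of the tree (in a temp directory, purely for illustration) and discovers every `(subject, task)` pair the same way `analyze.py` does:

```python
from pathlib import Path
import tempfile

def discover_recordings(data_dir: Path):
    """Yield (subject, task, path) for every .set file under sub-*/eeg/."""
    for subject_dir in sorted(data_dir.glob("sub-*")):
        for eeg_file in sorted((subject_dir / "eeg").glob("*.set")):
            yield subject_dir.name, eeg_file.stem, eeg_file

# Build a disposable copy of the layout just to demonstrate the traversal.
root = Path(tempfile.mkdtemp())
for sub in ("sub-01", "sub-02"):
    (root / sub / "eeg").mkdir(parents=True)
    for task in ("nback", "rest"):
        (root / sub / "eeg" / f"{task}.set").touch()

pairs = [(s, t) for s, t, _ in discover_recordings(root)]
print(pairs)  # [('sub-01', 'nback'), ('sub-01', 'rest'), ('sub-02', 'nback'), ('sub-02', 'rest')]
```

Sorting both globs makes the processing order deterministic across filesystems, which keeps logs and result files reproducible between runs.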

Full Analysis Script

"""analyze.py — Multi-subject EEG LOC analysis."""

from pathlib import Path
import pandas as pd

from aime_loc import LOC
from aime_loc.eeg import EEG
from aime_loc.eeg.viz import cognitive_radar, psd_plot

# ── Setup ──────────────────────────────────────────────
loc = LOC()
eeg = EEG(loc)
data_dir = Path("data")
fig_dir = Path("figures")
results_dir = Path("results")
fig_dir.mkdir(exist_ok=True)
results_dir.mkdir(exist_ok=True)  # CSV/JSON exports below write here

# ── Load and Score All Recordings ──────────────────────
session = eeg.session()

for subject_dir in sorted(data_dir.glob("sub-*")):
    subject = subject_dir.name
    eeg_dir = subject_dir / "eeg"

    for eeg_file in sorted(eeg_dir.glob("*.set")):
        task = eeg_file.stem
        print(f"Processing {subject} / {task}...")

        try:
            rec = eeg.load(eeg_file)
            rec.preprocess()
            epochs = rec.extract_epochs(duration=2.0)
            session.add(epochs, subject=subject, task=task)
        except Exception as e:
            print(f"  SKIPPED: {e}")

print(f"\nSession: {session}")

# ── Score ──────────────────────────────────────────────
results = eeg.score_session(session)

# ── Summary Table ──────────────────────────────────────
print("\n" + "=" * 60)
results.summary_table()
results.export_csv("results/study_results.csv")

# ── Group Analysis ─────────────────────────────────────
df = pd.read_csv("results/study_results.csv")

# Table 1: Mean TC by task
print("\nTable 1: Mean TC Score by Task")
task_means = df.groupby("task")["tc_score"].agg(["mean", "std", "count"])
task_means.columns = ["Mean TC%", "SD", "N"]
print(task_means.round(2))

# Table 2: Mean TC by subject
print("\nTable 2: Mean TC Score by Subject")
subj_means = df.groupby("subject")["tc_score"].mean().sort_values(ascending=False)
print(subj_means.round(2))

# ── Figures ────────────────────────────────────────────

# Figure 1: Example subject radar chart
profile = results.profiles[0]
cognitive_radar(
    profile,
    show=False,
    save=str(fig_dir / "fig1_example_radar.pdf"),
    journal="nature",
    dpi=600,
)

# Figure 2: Compare rest vs n-back for the first subject that has both
rest_by_subj = {p.subject_id: p for p in results.profiles if p.task == "rest"}
nback_by_subj = {p.subject_id: p for p in results.profiles if p.task == "nback"}
common = sorted(set(rest_by_subj) & set(nback_by_subj))

if common:
    subj = common[0]
    cognitive_radar(
        [rest_by_subj[subj], nback_by_subj[subj]],
        title="Rest vs N-Back Cognitive Profile",
        show=False,
        save=str(fig_dir / "fig2_rest_vs_nback.pdf"),
        journal="nature",
        dpi=600,
    )

# ── Per-Subject Exports ───────────────────────────────
for profile in results.profiles:
    name = f"{profile.subject_id}_{profile.task}"
    profile.to_json(f"results/{name}.json")

print(f"\nDone! {results.n_profiles} profiles scored.")
print("Results saved to results/")
print("Figures saved to figures/")
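The script stops at descriptive tables; a within-subject contrast (e.g., rest vs. n-back TC) is a natural next step for the paper. A minimal sketch with pandas, using hypothetical values in place of the real `results/study_results.csv` (the `subject`/`task`/`tc_score` columns match those used above):

```python
import pandas as pd

# Hypothetical long-format results mirroring results/study_results.csv.
df = pd.DataFrame({
    "subject":  ["sub-01", "sub-01", "sub-02", "sub-02", "sub-03", "sub-03"],
    "task":     ["rest", "nback"] * 3,
    "tc_score": [31.2, 23.4, 28.7, 21.9, 30.1, 22.5],
})

# Wide format: one row per subject, one column per task,
# so each subject's rest and n-back scores line up for a paired contrast.
wide = df.pivot(index="subject", columns="task", values="tc_score")
diff = wide["rest"] - wide["nback"]
print(f"mean paired difference = {diff.mean():.2f} (SD = {diff.std(ddof=1):.2f})")
# mean paired difference = 7.40 (SD = 0.53)
```

From here a paired t-test (e.g., `scipy.stats.ttest_rel`) or a repeated-measures ANOVA across all five tasks would give the inferential statistics for the Results section.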

Running the Analysis

python analyze.py

Expected Output

(Counts below correspond to a run with 9 subjects present under data/; the full 17-subject dataset yields 85 recordings.)

Processing sub-01 / nback...
Processing sub-01 / rest...
Processing sub-01 / flanker...
Processing sub-02 / nback...
...

Session: EEGSession(45 recordings, 9 subjects, 5 tasks)

============================================================
Subject      Task         TC%      Best Function    Epochs
------------------------------------------------------------
sub-01       nback       23.40%   Attention              150
sub-01       rest        31.20%   Mindfulness            200
...

Table 1: Mean TC Score by Task
         Mean TC%    SD    N
task
flanker     19.80  3.21    9
matb        17.50  4.12    9
nback       22.10  2.87    9
pvt         16.30  3.95    9
rest        29.40  4.56    9

Done! 45 profiles scored.

Writing the Paper

Methods Section Template

Cognitive coherence was assessed using AIME LOC V7 (aime-loc.com). EEG data were loaded, preprocessed (0.5–45 Hz bandpass, 50 Hz notch, average reference), and segmented into 2-second epochs. Power spectral density was computed using Welch's method and submitted to the AIME API for True Coherence (TC) scoring across 13 cognitive functions. TC is computed by a proprietary server-side algorithm.
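For intuition about the Welch step the methods describe (the library presumably handles this internally; this is not aime_loc's actual code), here is a hand-rolled Welch estimate on a synthetic 2-second epoch, assuming a 256 Hz sampling rate:

```python
import numpy as np

def welch_psd(x, fs, nperseg=256):
    """Welch PSD: average of Hann-windowed, 50%-overlapping periodograms."""
    window = np.hanning(nperseg)
    scale = fs * (window ** 2).sum()
    step = nperseg // 2
    segments = [x[i:i + nperseg] for i in range(0, len(x) - nperseg + 1, step)]
    periodograms = [np.abs(np.fft.rfft(window * (s - s.mean()))) ** 2 / scale
                    for s in segments]
    freqs = np.fft.rfftfreq(nperseg, 1 / fs)
    return freqs, np.mean(periodograms, axis=0)

fs = 256                          # Hz; assumed sampling rate for this sketch
t = np.arange(fs * 2) / fs        # one 2-second epoch
x = np.sin(2 * np.pi * 10 * t)    # pure 10 Hz ("alpha") tone
freqs, psd = welch_psd(x, fs)
print(freqs[np.argmax(psd)])      # peak lands at 10.0 Hz
```

Averaging overlapping windowed periodograms is what trades frequency resolution for a lower-variance spectrum, which is why Welch's method is standard for short EEG epochs; in practice `scipy.signal.welch` would replace the hand-rolled version.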

Results Section

Table 1 shows mean TC scores by task. Resting state exhibited the highest cognitive coherence (M = 29.4%, SD = 4.56), while PVT showed the lowest (M = 16.3%, SD = 3.95). This pattern aligns with the hypothesis that focused sustained attention tasks disrupt the natural cognitive hierarchy measured by LOC.

Citation

@software{aime_loc,
  title={AIME LOC: Consciousness Research Toolkit for AI \& Human Minds},
  author={AIME Research},
  year={2026},
  url={https://aime-loc.com},
  version={0.2.0}
}