

Adaptive Learning Overview

Two templates showing how to adjust a stimulus parameter (e.g., display duration) based on both success/failure and reaction time (RT). One trains a machine learning model online from participant performance; the other uses a simple 1-up/1-down staircase method.

Machine Learning Version

The machine-learning models update online (i.e., trial by trial) with each new data point. This version uses scikit-learn (https://scikit-learn.org/stable/).

Note: This example does not collect enough data for reliable prediction modeling, but it works well as a proof of concept or starter demo for adaptive behavior.


1. Dependencies & Models

  • Python Libraries

    • NumPy (array handling)

    • scikit-learn

      • SGDClassifier: simple logistic classifier for “success vs. fail”
      • SGDRegressor: simple linear regressor for “parameter → RT”
  • How It Works (High Level)

  • Classifier learns, on every trial:

    [parameter]  →  {0 or 1} (fail or success)
    

    so it estimates “If I show for X seconds, what’s P(success)?”

  • Regressor learns, on success trials only:

    [parameter]  →  RT (in seconds)
    

    so it estimates “If they succeed with X, how fast do they react?”

  • Online Learning: After each trial, both models are updated immediately with the new (parameter, outcome) and (parameter, RT) data.
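
As a minimal sketch of these online updates (the numbers are illustrative; the full loop appears in section 4 below):

import numpy as np
from sklearn.linear_model import SGDClassifier, SGDRegressor

# Incremental models: partial_fit updates them one trial at a time
clf = SGDClassifier(loss="log_loss")   # parameter -> P(success)
reg = SGDRegressor()                   # parameter -> RT on success trials

# The first classifier update must declare both classes so that
# predict_proba works from the very first trial onward
clf.partial_fit(np.array([[3.0]]), np.array([1]), classes=[0, 1])
reg.partial_fit(np.array([[3.0]]), np.array([1.18]))

# Each later trial is just another partial_fit call with the newest data point
clf.partial_fit(np.array([[2.5]]), np.array([0]))
print(clf.predict_proba([[2.75]])[0][1])   # estimated P(success) at 2.75 s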


2. Adaptive Loop (High Level)

  1. Present Stimulus using the current parameter (e.g., show for 3.0 s).
  2. Record:
     • success = True/False
     • rt = measured reaction time (or None if failed)
  3. Update Classifier with (current_param → success).
  4. Update Regressor (only if success) with (current_param → rt).
  5. Select Next Parameter (see the sketch after this list):
     • Generate candidate values from MIN to MAX in STEP increments.
     • Filter out any candidate whose predicted P(success) < SUCCESS_P (e.g., 0.70).
     • Among the remaining, choose the one whose predicted RT is closest to your target_rt (e.g., 1.0 s).
  6. Repeat for each trial.
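
The selection step can be isolated into a small helper. A sketch, using the same model objects and constants as the extracted snippet in section 4 (the function name and the fallback argument are illustrative):

import numpy as np

def choose_next_param(classifier, regressor, fitted_reg,
                      min_param, max_param, step, success_p, target_rt, fallback):
    """Among candidates meeting the success threshold, return the one whose
    predicted RT is closest to target_rt; keep the fallback if none qualify."""
    best_param, best_diff = fallback, float("inf")
    for t in np.arange(min_param, max_param + step, step):
        p_success = classifier.predict_proba([[t]])[0][1]
        if p_success < success_p:
            continue
        # Before the regressor has seen any data, all qualifying candidates tie
        diff = abs(regressor.predict([[t]])[0] - target_rt) if fitted_reg else 0.0
        if diff < best_diff:
            best_param, best_diff = t, diff
    return best_param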

3. Console Printout Example

Each trial prints:

Prev=3.00, Success=True, RT=1.18 → Next=2.50, P=1.00, PredRT=1.00
  • Prev=3.00: last trial’s parameter (3 s).
  • Success=True: participant succeeded.
  • RT=1.18: measured RT (1.18 s).
  • Next=2.50: chosen parameter for next trial.
  • P=1.00: classifier’s predicted success probability at 2.50 s (≥ threshold).
  • PredRT=1.00: regressor’s predicted RT at 2.50 s (closest to target).

4. Extracted Code Snippet

Copy and paste this ML-only logic into your own experiment; replace present_stimulus(...), get_success(), and get_reaction_time() with your own code:

import numpy as np
from sklearn.linear_model import SGDClassifier, SGDRegressor

# ── 1) Setup ─────────────────────────────────────────────
classifier = SGDClassifier(loss="log_loss", learning_rate="constant", eta0=0.5)
regressor  = SGDRegressor(learning_rate="constant", eta0=0.1)

fitted_cls = False
fitted_reg = False

MIN_PARAM   = 1.0         # e.g., seconds
MAX_PARAM   = 5.0
STEP        = 0.25
SUCCESS_P   = 0.70        # require ≥70% success
TARGET_RT   = 1.0         # desired RT (seconds)

total_trials  = 20        # number of adaptive trials to run
current_param = 3.0       # initial value

# ── 2) Trial Loop ───────────────────────────────────────
for trial in range(total_trials):
    # a) Present stimulus for current_param and record outcome
    present_stimulus(current_param)
    success = get_success()      # True/False
    rt      = get_reaction_time() if success else None

    # b) Update Classifier on (current_param → success)
    X_cls = np.array([[current_param]])
    y_cls = np.array([1 if success else 0])
    if not fitted_cls:
        classifier.partial_fit(X_cls, y_cls, classes=[0,1])
        fitted_cls = True
    else:
        classifier.partial_fit(X_cls, y_cls)

    # c) Update Regressor (only if success) on (current_param → rt)
    if success and rt is not None:
        regressor.partial_fit(np.array([[current_param]]), np.array([rt]))
        fitted_reg = True

    # d) Choose next parameter: among candidates predicted to meet the
    #    success threshold, pick the one whose predicted RT is closest to TARGET_RT
    best_p, best_diff = 0.0, float("inf")
    next_param = current_param
    chosen_pred_rt = None

    for t in np.arange(MIN_PARAM, MAX_PARAM + STEP, STEP):
        p_success = classifier.predict_proba([[t]])[0][1]
        if p_success < SUCCESS_P:
            continue
        if fitted_reg:
            pred_rt = regressor.predict([[t]])[0]
            diff    = abs(pred_rt - TARGET_RT)
        else:
            pred_rt, diff = None, 0.0

        # Smallest |pred_rt - TARGET_RT| wins; ties go to the higher P(success)
        if (diff < best_diff) or (diff == best_diff and p_success > best_p):
            best_p, best_diff = p_success, diff
            next_param        = t
            chosen_pred_rt    = pred_rt

    rt_str  = f"{rt:.2f}" if rt is not None else "miss"
    prt_str = f"{chosen_pred_rt:.2f}" if chosen_pred_rt is not None else "N/A"
    print(
        f"Prev={current_param:.2f}, Success={success}, RT={rt_str} "
        f"→ Next={next_param:.2f}, P={best_p:.2f}, PredRT={prt_str}"
    )

    current_param = next_param

  • present_stimulus(current_param): your code to show whatever you need for current_param.
  • get_success(): return True if participant succeeded, else False.
  • get_reaction_time(): return RT in seconds (only if get_success() was True).
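
If you want to dry-run the loop before wiring it into an experiment, hypothetical stand-ins like these (plain Python, no SightLab or Vizard calls) are enough to watch the adaptation move:

import random
import time

_last_duration = 0.0  # remembered so the fake outcome can depend on the duration shown

def present_stimulus(duration):
    # Stand-in: a real experiment would display the stimulus for `duration` seconds
    global _last_duration
    _last_duration = duration
    time.sleep(0.01)

def get_success():
    # Stand-in: longer display durations are more likely to succeed
    return random.random() < min(1.0, _last_duration / 5.0)

def get_reaction_time():
    # Stand-in: roughly faster responses for longer durations
    return max(0.3, 2.0 - 0.3 * _last_duration + random.gauss(0.0, 0.1))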

Paste this snippet into your trial sequence.


1-Up/1-Down Staircase Method

On a hit: the duration decreases by one STEP.

On a miss: the duration increases by one STEP.

The HUD (statusText) shows the updated duration and result on each trial.
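
The update rule itself is only a few lines. A minimal sketch with illustrative bounds (the HUD/statusText update is SightLab-specific and omitted here):

MIN_DURATION, MAX_DURATION, STEP = 0.5, 5.0, 0.25   # illustrative range (seconds)

def update_staircase(duration, success):
    """1-up/1-down rule: decrease duration after a hit, increase it after a miss."""
    duration += -STEP if success else STEP
    return max(MIN_DURATION, min(MAX_DURATION, duration))

# After each trial: duration = update_staircase(duration, success)

Because the up and down steps are equal, this rule settles around the duration at which the participant succeeds roughly 50% of the time.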

How to Adapt as a Template

  • Swap Stimulus: Any object, image, sound, or prompt. Record success and RT accordingly.
  • Adjust Ranges: Set your own minimum/maximum and step size for the parameter.
  • Set Threshold & Target: Choose a success‐probability threshold (e.g., 0.70) and a target RT (e.g., 1 s).
  • Optional RT Model: If you only care about success, skip the RT model and pick the smallest value that meets your success threshold (see the sketch below).

Over successive trials, this approach “homes in” on a parameter value that balances high success rates and your desired reaction speed.
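
If you drop the RT model, the selection step reduces to taking the smallest candidate that clears the success threshold. A sketch reusing the classifier from the extracted snippet (the function name and fallback argument are illustrative):

import numpy as np

def smallest_successful_param(classifier, min_param, max_param, step, success_p, fallback):
    """Return the smallest candidate whose predicted P(success) meets the threshold."""
    for t in np.arange(min_param, max_param + step, step):
        if classifier.predict_proba([[t]])[0][1] >= success_p:
            return t
    return fallback  # nothing qualifies yet; keep the current parameter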

Biofeedback-Based Adaptation (e.g., Biopac)

Use Biopac signals (e.g., heart rate, EDA) to influence trial difficulty or flow:

def outputToScreen(index, frame, channelsInSlice):
    stress_val = frame[0]  # e.g., EDA or heart rate sample from Biopac
    trial_difficulty = "easy" if stress_val > threshold else "hard"  # threshold: define for your signal's range
    sightlab.setCustomTrialData(str(stress_val), "Biopac_StressValue")
    # Dynamically change object visibility, timing, etc.

This allows adaptive difficulty or trial termination/extension based on real-time physiological markers.
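
For example, the most recent stress reading could nudge the next trial's parameter directly. A sketch in which the function, its step size, and the easy/hard mapping are all assumptions to tune for your own signal:

def adjust_param_for_stress(current_param, stress_val, threshold,
                            step=0.25, min_param=1.0, max_param=5.0):
    """Ease the task (longer display) when the stress signal is high, push it when low."""
    if stress_val > threshold:
        current_param += step   # high arousal: make the next trial easier
    else:
        current_param -= step   # calm: make the next trial harder
    return max(min_param, min(max_param, current_param))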


Additional Adaptation Ideas Using Existing SightLab Tools

  1. Facial Expression Input (e.g., Affectiva SDK): Classify emotion (happy, frustrated) → modify task feedback or select appropriate stimulus.

  2. Body Position & Proximity Sensors:

     • Use vizproximity to start or end trials.
     • Dynamically relocate objects based on user position or posture.

  3. Eye Metrics-Based Adaptation:

     • Real-time gaze deviation, pupil dilation → adjust time pressure or attention-grabbing visuals.

  4. Combined Gaze + Biofeedback:

     • Use gaze dwell time and elevated skin conductance to determine stress → branch the learning path.

  5. Adaptive Rating Scales: Combine with sightlab.showRatings():

    yield sightlab.showRatings('How confident were you?', ratingScale=["Low", "Med", "High"])
    if sightlab.ratingChoice == "Low":
        pass  # Trigger simplified explanation or extra guidance
    