Visual Search Experiment - Pausch et al. (1997) VR Study Replication
This example contains a full SightLab implementation replicating the landmark study:
Quantifying Immersion in Virtual Reality
Randy Pausch, Dennis Proffitt, George Williams (1997)
Study Overview
This experiment replicates the classic visual search paradigm used to measure how immersive interfaces (VR Walking Navigation vs. Desktop Controller) influence search efficiency.
Participants are tasked with finding a specific letter in a room filled with distractors.
Experimental Variables
- Navigation Condition:
  - Walking (physical, head-tracked movement)
  - Desktop with controller
- Trial Types:
  - Red Y (target present, feature pop-out)
  - Black Y (target present, conjunction search)
  - Black Y (target absent; 50/50 chance the target letter is not present)
Dependent Variables
- Reaction Time (RT)
- Fixations/Saccades
- View count and viewing-sequence structure
- Walk Path Patterns
Response Options
- Keypress: the '1' or '2' key
- Likert scale: shown after 10 seconds, forcing a choice
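Both handlers live in Modules/response_module.py, which is not reproduced in this document. As a rough sketch of what keypressResponse could look like (the module's actual implementation may differ), using only standard Vizard viztask calls:

```python
import viztask

def keypressResponse():
    # Hypothetical sketch of Modules/response_module.keypressResponse:
    # returns a task that waits for the '1' or '2' key and returns it.
    def handler(sightlab):
        data = yield viztask.waitKeyDown(['1', '2'])  # wait for either response key
        viztask.returnValue(data.key)                 # hand the key back to the caller
    return handler
```

presentStimuli (in the full code below) receives this handler and retrieves the key with `response_key = yield response_handler(sightlab)`.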
How to Run
Requirements
- Vizard 7+
- SightLab Toolkit installed
To Run the Experiment
- Manually set whether the group is 'VR' or 'Desktop', and whether the target is uncamouflaged (target letter is red)
- Modify experiment.py with the following values:
  - PRACTICE_TRIALS = 0 → Set to 0 to skip practice
  - REAL_TRIALS = 10 → Number of trials
  - CONDITION_GROUP = 'Desktop' → 'Desktop' or 'VR'
  - UNCAMOUFLAGED = False → True for uncamouflaged (red target), False for camouflaged (black target)
  - NUM_DISTRACTORS = 85 → Or any number you want
  - TARGET_ITEM = 'Y' → Target letter or item to find
  - ITEM_SET = list('AKMNVWXYZ') → List of items, including distractors
  - RESPONSE = keypressResponse() → Waits for the '1' or '2' key; alternatively, delayedRatingResponse()
  - INSTRUCTIONS → Custom instructions can be placed here
- Make sure VR hardware is set up and ready if using the VR condition
- Run Main_Pausch_Visual_Search in Vizard or by double-clicking it
- Follow the on-screen instructions
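For example, a camouflaged VR run with two warm-up trials would use settings like these at the top of experiment.py (PRACTICE_TRIALS = 2 is illustrative; the other values come from the defaults above):

```python
# Example experiment.py settings for a camouflaged VR run
PRACTICE_TRIALS = 2           # illustrative: two warm-up trials before the recorded ones
REAL_TRIALS = 10              # number of recorded trials
CONDITION_GROUP = 'VR'        # head-tracked walking navigation
UNCAMOUFLAGED = False         # target 'Y' rendered black (conjunction search)
NUM_DISTRACTORS = 85
TARGET_ITEM = 'Y'
ITEM_SET = list('AKMNVWXYZ')
RESPONSE = keypressResponse() # '1' = present, '2' = absent
```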
Data Storage
- Experiment Metrics: Stored in the /data/ folder using the default SightLab data structure, which includes core gaze and navigation metrics as well as reaction time and response accuracy
Data Analysis
A VisualSearch_Comparison_Analysis.py script is included to analyze performance across conditions (for all sessions in the data folder by default).
It compares:
- Navigation Condition (Walking vs Controller)
- Reaction Time & Accuracy
You can extend the analysis to include view count, dwell time, fixation patterns, and more.
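A minimal sketch of that comparison, assuming each session's summary is exported as a CSV under /data/ containing the NavigationCondition, ReactionTime, and ResponseCorrect fields logged in presentStimuli below (the bundled script's actual file handling may differ):

```python
import glob
import pandas as pd

# Gather every session summary in the data folder (the filename pattern is an assumption)
frames = [pd.read_csv(path) for path in glob.glob('data/**/*summary*.csv', recursive=True)]
df = pd.concat(frames, ignore_index=True)

# Mean reaction time and accuracy per navigation condition
summary = df.groupby('NavigationCondition').agg(
    mean_rt=('ReactionTime', 'mean'),
    accuracy=('ResponseCorrect', 'mean'),
    n_trials=('ReactionTime', 'count'),
)
print(summary)
```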
What the Experiment Does
The participant will:
1. Read the instructions
2. Complete randomized trials; each trial displays the target type
3. Respond:
   - Keypress option: press '1' if the target was found, '2' if it was absent
   - Likert option: wait 10 seconds, then make a forced choice
4. Letter positions and response times are logged
The design replicates the findings from:
Pausch, R., Proffitt, D., & Williams, G. (1997).
Quantifying Immersion in Virtual Reality.
Proceedings of the 24th Annual Conference on Computer Graphics and Interactive Techniques.
SightLab Data Collection
All experiment data is automatically saved in the /data/ folder following SightLab's standard format, which includes:
Core Eye Tracking Metrics
- Timestamp
- Trial Label
- Gaze Intersection (Combined, Left & Right)
- Eye Euler Angles (Combined, Left & Right)
- Head Position (6DOF)
- Fixation / Saccade State
- Saccade Amplitude & Velocity
- Pupil Diameter (device dependent)
- Eye Openness (device dependent)
- Dwell Time
- View Count
- Fixation Count
- Time to First Fixation
- Reaction Time
- Condition & Custom Trial Data
Object-Specific Gaze Data
- Dwell Time per Object
- Fixations per Object
- Total & Average Gaze Duration per Object
- Fixation Sequence Analysis
- AOI (Area of Interest) Metrics
Physiological Correlates (Optional)
When paired with BIOPAC systems:
- Electrodermal Activity (EDA)
- Heart Rate (HR)
- EEG, fNIRS, and others
Extended SightLab Capabilities
This experiment can easily be extended to include:
- Rating / Likert Scales
- Multi-User Collaborative Search
- Real-time Biofeedback Display
- Adaptive Trial Difficulty
- Custom Event Markers
- Gaze Contingent Displays
- Replay Heatmaps, Scan Paths, Walk Paths
All of these can be configured using SightLab's GUI or script API.
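For instance, a custom event marker can be written into the session summary with the same setExperimentSummaryData call used in presentStimuli below (the marker name and value here are purely illustrative):

```python
# Tag the current trial with a custom marker (name and value are illustrative)
sightlab.setExperimentSummaryData('EventMarker', 'target_spotted')
```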
Full Code
"""
This experiment tests visual search performance under two conditions (VR vs Desktop)
and target-present vs target-absent trials. Optional to have target camouflaged or uncamoflouged
Structure:
1) Instructions
2) Optional: Practice Trials
3) Running Trials (with randomized blocking, stimulus presentation & judgment)
4) Conclusion
"""
import experiment
from experiment import showInstructions, runPracticeTrials, runTrials, endExperiment
from experiment import *
import sightlab_utils.sightlab as sl
from sightlab_utils.settings import *
sightlab = sl.SightLab(gui=False)
sightlab.setStartText(' ')
env = vizfx.addChild('Resources/whiteroom.osgb')
sightlab.setEnvironment(env)
sightlab.setTrialCount(experiment.TOTAL_TRIALS)
def sightLabExperiment():
yield viztask.waitEvent(EXPERIMENT_START)
yield showInstructions(sightlab)
yield runPracticeTrials(sightlab)
yield runTrials(sightlab)
yield endExperiment(sightlab)
viztask.schedule(sightlab.runExperiment)
viztask.schedule(sightLabExperiment)
viz.callback(viz.getEventID('ResetPosition'), sightlab.resetViewPoint)
experiment.py
```python
import viztask
import random
import viz
from Modules.letter_module import createItems, clearLetters, saveLetterPositions
from Modules.response_module import keypressResponse, delayedRatingResponse

PRACTICE_TRIALS = 0  # Set to 0 to skip practice
REAL_TRIALS = 10
CONDITION_GROUP = 'Desktop'  # Set to 'VR' or 'Desktop'
UNCAMOUFLAGED = True
NUM_DISTRACTORS = 85  # or whatever number you want
TARGET_ITEM = 'Y'
ITEM_SET = list('AKMNVWXYZ')  # Or filenames of 3D models
RESPONSE = keypressResponse()  # Waits for the 1 or 2 key
#RESPONSE = delayedRatingResponse()  # Waits 10 seconds and shows a rating

TOTAL_TRIALS = PRACTICE_TRIALS + REAL_TRIALS

def chooseTrialType():
    # 50/50 chance of a target-present vs. target-absent trial
    if random.random() < 0.5:
        if UNCAMOUFLAGED:
            return {'target': TARGET_ITEM, 'color': viz.RED, 'present': True, 'uncamouflaged': True}
        else:
            return {'target': TARGET_ITEM, 'color': viz.BLACK, 'present': True, 'uncamouflaged': False}
    else:
        return {'target': TARGET_ITEM, 'color': viz.BLACK, 'present': False, 'uncamouflaged': False}

def showInstructions(sightlab):
    # No custom instructions by default; yield immediately so this still runs as a subtask
    yield viztask.waitTime(0)

def runPracticeTrials(sightlab):
    if PRACTICE_TRIALS > 0:
        for i in range(PRACTICE_TRIALS):
            nav_condition = CONDITION_GROUP
            trial_type = chooseTrialType()
            yield presentStimuli(
                sightlab, trial_type, nav_condition, i,
                record_data=False, response_handler=RESPONSE
            )
        # Practice complete message
        yield sightlab.showInstructions("Practice complete.\n\nPress Trigger to begin the real trials.")
        yield viztask.waitEvent('triggerPress')

def runTrials(sightlab):
    for i in range(PRACTICE_TRIALS, TOTAL_TRIALS):
        nav_condition = CONDITION_GROUP
        trial_type = chooseTrialType()
        yield presentStimuli(sightlab, trial_type, nav_condition, i, response_handler=RESPONSE)

def presentStimuli(sightlab, trial_type, nav_condition, trial_index, record_data=True, response_handler=None):
    trial_label = 'Practice' if not record_data else nav_condition
    trial_type_text = "Practice Trial\n\n" if not record_data else ""
    # Note: the instruction text hardcodes 'Y'; update it if TARGET_ITEM changes
    start_text = f"{trial_type_text}Find the Y\n\nPress 1 if present, 2 if not\n\nPress Trigger to Continue"

    # ---------------------------
    # Fixation or Instruction
    # ---------------------------
    yield sightlab.startTrial(
        startTrialText=start_text,
        trialLabel=trial_label,
        trackingDataLogging=record_data,
        experimentSummaryLogging=record_data,
        replayDataLogging=record_data,
        timeLineLogging=record_data
    )

    # ---------------------------
    # Stimulus Presentation
    # ---------------------------
    letters = createItems(trial_type, sightlab, ITEM_SET, NUM_DISTRACTORS)
    if record_data:
        saveLetterPositions(letters, trial_index + 1)
    start_time = sightlab.hud.getCurrentTime()

    # ---------------------------
    # Judgment (Discrete DVs)
    # ---------------------------
    response_key = yield response_handler(sightlab)
    rt = round(sightlab.hud.getCurrentTime() - start_time, 4)
    expected_response = '1' if trial_type['present'] else '2'
    correct = (response_key == expected_response)
    print(f"{'PRACTICE' if not record_data else 'TRIAL'} {trial_index + 1}: "
          f"TargetPresent={trial_type['present']}, "
          f"Pressed={response_key}, Expected={expected_response}, Correct={correct}")

    if record_data:
        sightlab.setExperimentSummaryData('NavigationCondition', nav_condition)
        sightlab.setExperimentSummaryData('TargetPresent', trial_type['present'])
        sightlab.setExperimentSummaryData('ReactionTime', rt)
        sightlab.setExperimentSummaryData('ResponseCorrect', correct)

    clearLetters(letters)
    yield sightlab.endTrial()

def endExperiment(sightlab):
    conclusion_text = "Thank you for participating!"
    yield sightlab.endTrial(endExperimentText=conclusion_text)
```
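Modules/letter_module.py is not reproduced here. A minimal sketch of what createItems and clearLetters might do, assuming Vizard's viz.addText3D for letter rendering (the scatter bounds, scale, and placement logic are all illustrative):

```python
import random
import viz

def createItems(trial_type, sightlab, item_set, num_distractors):
    # Hypothetical sketch: scatter distractor letters through the room and,
    # on target-present trials, add the target in its trial color.
    distractors = [c for c in item_set if c != trial_type['target']]
    chars = [random.choice(distractors) for _ in range(num_distractors)]
    if trial_type['present']:
        chars.append(trial_type['target'])
    random.shuffle(chars)

    letters = []
    for char in chars:
        text = viz.addText3D(char)
        text.setScale(0.2, 0.2, 0.2)
        text.setPosition(random.uniform(-4, 4), random.uniform(0.5, 2.5), random.uniform(-4, 4))
        # Camouflaged targets stay black like the distractors; uncamouflaged targets are red
        text.color(trial_type['color'] if char == trial_type['target'] else viz.BLACK)
        letters.append(text)
    return letters

def clearLetters(letters):
    # Remove the letter nodes from the scene at the end of a trial
    for letter in letters:
        letter.remove()
```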