This experiment replicates the classic visual search paradigm used to measure how immersive interfaces (VR Walking Navigation vs. Desktop Controller) influence search efficiency.
Participants are tasked with finding a specific letter in a room filled with distractors.
Experimental Variables
Navigation Condition:
Walking (Physical head-tracked movement)
Desktop with Controller
Trial Types:
Red Y (Target Present, feature pop-out)
Black Y (Target Present, conjunction search)
Black Y (Target Absent; each trial has a 50/50 chance that the target letter is absent)
Dependent Variables
Reaction Time (RT)
Fixations/Saccades
View count sequence structure
Walk Path Patterns
Response Options
Keypress: press '1' if the target is present, '2' if it is absent
Likert scale: shown after 10 seconds, forcing a choice (a sketch of both response handlers follows below)
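Both handlers live in Modules/response_module.py, which experiment.py imports but which is not reproduced in the Full Code section. The sketch below is a hypothetical reconstruction: the factory-returning-a-task shape is inferred from how experiment.py uses it (RESPONSE = keypressResponse(), then response_key = yield RESPONSE(sightlab)), and it assumes Vizard's viztask.waitKeyDown, viztask.waitTime, and viztask.returnValue.

```python
# Hypothetical sketch of Modules/response_module.py (not shipped in this README).
# The factory shape is inferred from experiment.py, where RESPONSE = keypressResponse()
# and each trial does: response_key = yield RESPONSE(sightlab).
import viztask

def keypressResponse():
    def task(sightlab):
        # Block until the participant presses '1' (present) or '2' (absent)
        d = yield viztask.waitKeyDown(('1', '2'))
        viztask.returnValue(d.key)
    return task

def delayedRatingResponse():
    def task(sightlab):
        # Let the participant search for 10 seconds, then force a choice.
        # (The real handler presumably shows a SightLab rating scale;
        # a plain keypress stands in for it here.)
        yield viztask.waitTime(10)
        d = yield viztask.waitKeyDown(('1', '2'))
        viztask.returnValue(d.key)
    return task
```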
How to Run
Requirements
Vizard 7+
SightLab Toolkit installed
To Run the Experiment
Set the navigation group ('VR' or 'Desktop') and whether the target is uncamouflaged (i.e., the target letter is red).
Modify experiment.py with the following values:
```python
PRACTICE_TRIALS = 0            # Set to 0 to skip practice
REAL_TRIALS = 10               # Number of trials
CONDITION_GROUP = 'Desktop'    # 'Desktop' or 'VR'
UNCAMOUFLAGED = False          # True = uncamouflaged (red) target, False = camouflaged
NUM_DISTRACTORS = 85           # Or any number you want
TARGET_ITEM = 'Y'              # Target letter or item to find
ITEM_SET = list('AKMNVWXYZ')   # List of items including distractors
RESPONSE = keypressResponse()  # Waits for the 1 or 2 key. Option 2: delayedRatingResponse()
INSTRUCTIONS = "..."           # Custom instructions can go here
```
Make sure VR hardware is set up and ready if using the VR condition
Run Main_Pausch_Visual_Search.py in Vizard or by double-clicking the file
Follow on-screen instructions
Data Storage
Experiment Metrics:
Stored in the /data/ folder using the default SightLab data structure, which includes core gaze and navigation metrics as well as reaction time and response accuracy
Data Analysis
A VisualSearch_Comparison_Analysis.py script is included to analyze performance across conditions (for all sessions in the data folder by default).
It compares:
Navigation Condition (Walking vs Controller)
Reaction Time & Accuracy
You can extend the analysis to include view count, dwell time, fixation patterns, and more.
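As a rough orientation, the comparison boils down to grouping trial summaries by condition. The sketch below is not the shipped script; it assumes each session writes a summary CSV under /data/ containing NavigationCondition, ReactionTime, and ResponseCorrect columns (the fields that experiment.py logs) and uses pandas:

```python
# Hypothetical sketch of a cross-condition comparison (not the shipped
# VisualSearch_Comparison_Analysis.py). Assumes per-session summary CSVs
# under data/ with the columns logged by experiment.py.
import glob
import pandas as pd

frames = [pd.read_csv(p) for p in glob.glob('data/**/*summary*.csv', recursive=True)]
df = pd.concat(frames, ignore_index=True)

summary = df.groupby('NavigationCondition').agg(
    mean_rt=('ReactionTime', 'mean'),      # average reaction time per condition
    accuracy=('ResponseCorrect', 'mean'),  # proportion of correct responses
    n_trials=('ReactionTime', 'count'),
)
print(summary)
```

From there, extending the analysis to view count or dwell time is a matter of pulling the corresponding SightLab columns into the same groupby.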
What the Experiment Does
The participant will:
1. Read instructions
2. Complete randomized trials
- Each trial displays the target type.
3. Respond
   - Keypress option: press '1' if the target was found, '2' if it was absent
   - Likert option: after 10 seconds, the participant is forced to make a choice
4. Letter positions and response times are logged
The design replicates the findings from:
Pausch, R., Proffitt, D., & Williams, G. (1997). Quantifying Immersion in Virtual Reality. In Proceedings of the 24th Annual Conference on Computer Graphics and Interactive Techniques (SIGGRAPH '97).
SightLab Data Collection
All experiment data is automatically saved in the /data/ folder following SightLab's standard format, which includes:
Core Eye Tracking Metrics
Timestamp
Trial Label
Gaze Intersection (Combined, Left & Right)
Eye Euler Angles (Combined, Left & Right)
Head Position (6DOF)
Fixation / Saccade State
Saccade Amplitude & Velocity
Pupil Diameter (device dependent)
Eye Openness (device dependent)
Dwell Time
View Count
Fixation Count
Time to First Fixation
Reaction Time
Condition & Custom Trial Data
Object-Specific Gaze Data
Dwell Time per Object (see the sketch after this list)
Fixations per Object
Total & Average Gaze Duration per Object
Fixation Sequence Analysis
AOI (Area of Interest) Metrics
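SightLab computes these object-level metrics for you. Purely as an illustration of what "dwell time per object" means, it could be aggregated from a raw per-sample gaze log along these lines; the file and column names below are hypothetical:

```python
# Illustration only: SightLab already reports dwell time per object.
# Assumes a per-sample tracking CSV with a Timestamp column and an
# 'Object' column naming the currently gazed object (hypothetical names).
import pandas as pd

gaze = pd.read_csv('data/session_01/tracking_data.csv')  # hypothetical path
gaze['dt'] = gaze['Timestamp'].diff().fillna(0)          # time since previous sample
dwell = gaze.groupby('Object')['dt'].sum().sort_values(ascending=False)
print(dwell.head(10))
```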
Physiological Correlates (Optional)
When paired with BIOPAC systems:
- Electrodermal Activity (EDA)
- Heart Rate (HR)
- EEG, fNIRS, and others
Extended SightLab Capabilities
This experiment can easily be extended to include:
- Rating / Likert Scales
- Multi-User Collaborative Search
- Real-time Biofeedback Display
- Adaptive Trial Difficulty
- Custom Event Markers
- Gaze Contingent Displays
- Replay Heatmaps, Scan Paths, Walk Paths
All of these can be configured using SightLab's GUI or script API.
Full Code
"""This experiment tests visual search performance under two conditions (VR vs Desktop)and target-present vs target-absent trials. Optional to have target camouflaged or uncamoflougedStructure:1) Instructions2) Optional: Practice Trials3) Running Trials (with randomized blocking, stimulus presentation & judgment)4) Conclusion"""importexperimentfromexperimentimportshowInstructions,runPracticeTrials,runTrials,endExperimentfromexperimentimport*importsightlab_utils.sightlabasslfromsightlab_utils.settingsimport*sightlab=sl.SightLab(gui=False)sightlab.setStartText(' ')env=vizfx.addChild('Resources/whiteroom.osgb')sightlab.setEnvironment(env)sightlab.setTrialCount(experiment.TOTAL_TRIALS)defsightLabExperiment():yieldviztask.waitEvent(EXPERIMENT_START)yieldshowInstructions(sightlab)yieldrunPracticeTrials(sightlab)yieldrunTrials(sightlab)yieldendExperiment(sightlab)viztask.schedule(sightlab.runExperiment)viztask.schedule(sightLabExperiment)viz.callback(viz.getEventID('ResetPosition'),sightlab.resetViewPoint)
experiment.py
```python
import viztask
import random
import viz
from Modules.letter_module import createItems, clearLetters, saveLetterPositions
from Modules.response_module import keypressResponse, delayedRatingResponse

PRACTICE_TRIALS = 0            # Set to 0 to skip practice
REAL_TRIALS = 10
CONDITION_GROUP = 'Desktop'    # Set to VR or Desktop
UNCAMOUFLAGED = True
NUM_DISTRACTORS = 85           # or whatever number you want
TARGET_ITEM = 'Y'
ITEM_SET = list('AKMNVWXYZ')   # Or filenames of 3D models
RESPONSE = keypressResponse()  # Waits for the 1 or 2 key
#RESPONSE = delayedRatingResponse()  # Waits 10 seconds and shows a rating

TOTAL_TRIALS = PRACTICE_TRIALS + REAL_TRIALS

def chooseTrialType():
    if random.random() < 0.5:
        if UNCAMOUFLAGED:
            return {'target': TARGET_ITEM, 'color': viz.RED, 'present': True, 'uncamouflaged': True}
        else:
            return {'target': TARGET_ITEM, 'color': viz.BLACK, 'present': True, 'uncamouflaged': False}
    else:
        return {'target': TARGET_ITEM, 'color': viz.BLACK, 'present': False, 'uncamouflaged': False}

def showInstructions(sightlab):
    pass

def runPracticeTrials(sightlab):
    if PRACTICE_TRIALS > 0:
        for i in range(PRACTICE_TRIALS):
            nav_condition = CONDITION_GROUP
            trial_type = chooseTrialType()
            yield presentStimuli(sightlab, trial_type, nav_condition, i, record_data=False, response_handler=RESPONSE)
        # Practice complete message
        yield sightlab.showInstructions("Practice complete.\n\nPress Trigger to begin the real trials.")
        yield viztask.waitEvent('triggerPress')

def runTrials(sightlab):
    for i in range(PRACTICE_TRIALS, TOTAL_TRIALS):
        nav_condition = CONDITION_GROUP
        trial_type = chooseTrialType()
        yield presentStimuli(sightlab, trial_type, nav_condition, i, response_handler=RESPONSE)

def presentStimuli(sightlab, trial_type, nav_condition, trial_index, record_data=True, response_handler=None):
    trial_label = 'Practice' if not record_data else nav_condition
    trial_type_text = "Practice Trial\n\n" if not record_data else ""
    start_text = f"{trial_type_text}Find the Y\n\n Press 1 if present, 2 if not\n\nPress Trigger to Continue "

    # ---------------------------
    # Fixation or Instruction
    # ---------------------------
    yield sightlab.startTrial(
        startTrialText=start_text,
        trialLabel=trial_label,
        trackingDataLogging=record_data,
        experimentSummaryLogging=record_data,
        replayDataLogging=record_data,
        timeLineLogging=record_data
    )

    # ---------------------------
    # Stimulus Presentation
    # ---------------------------
    letters = createItems(trial_type, sightlab, ITEM_SET, NUM_DISTRACTORS)
    if record_data:
        saveLetterPositions(letters, trial_index + 1)
    start_time = sightlab.hud.getCurrentTime()

    # ---------------------------
    # Judgment (Discrete DVs)
    # ---------------------------
    response_key = yield response_handler(sightlab)
    rt = round(sightlab.hud.getCurrentTime() - start_time, 4)
    expected_response = '1' if trial_type['present'] else '2'
    correct = (response_key == expected_response)

    print(f"{'PRACTICE' if not record_data else 'TRIAL'} {trial_index + 1}: "
          f"TargetPresent={trial_type['present']}, "
          f"Pressed={response_key}, Expected={expected_response}, Correct={correct}")

    if record_data:
        sightlab.setExperimentSummaryData('NavigationCondition', nav_condition)
        sightlab.setExperimentSummaryData('TargetPresent', trial_type['present'])
        sightlab.setExperimentSummaryData('ReactionTime', rt)
        sightlab.setExperimentSummaryData('ResponseCorrect', correct)

    clearLetters(letters)
    yield sightlab.endTrial()

def endExperiment(sightlab):
    conclusion_text = "Thank you for participating!"
    yield sightlab.endTrial(endExperimentText=conclusion_text)
```