
Visual Search

Location

Find these templates in the ExampleScripts-VisualSearch directory.

Installation Note

Before you begin, if you want to use the additional visualization tools, make sure the numpy, matplotlib, and pandas Python libraries are installed. Install them via the Package Manager: navigate to Tools -> Package Manager.

Overview of Visual Search Tasks

This suite includes four distinct Visual Search tasks. Each task can be run as-is or customized to suit your needs.

Task Description

In this task, participants are instructed to locate a specified target object among a series of objects. Upon finding the target, an auditory cue plays and the object is highlighted. After the task, you can access raw data files and interactive session replays and, if using Biopac AcqKnowledge, analyze physiological data.

Running the Task

  • Execute VisualSearch_GUI.py to start the task and select options.
  • Run SessionReplay.py to view the replay of the session.

Customization Guide

To tailor this task to your specific requirements, follow these steps:

  • Insert Your Environment Model: Place your chosen environment model into the resources folder or any preferred directory. For places to source models, see: Getting 3D models and Assets
  • Select the Target Object:
    • Open your model in the Inspector tool.
    • Designate your target object. This can be an existing object in your scene or a new one added via File -> Add.
  • Configure the Target Area:
    • Overlay a region of interest on your target object.
    • Rename this region to the desired target name.
  • Update Target Object in Code:
    • In the code, locate the line target_object = 'StarryNightPainting'.
    • Replace 'StarryNightPainting' with your target object's name (see the sketch after this list).
  • Further Modifications:
    • Feel free to alter the source code directly for additional customizations.
    • Leverage the extensive SightLab template library to enhance interactivity and functionality.
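As an illustration, here is a minimal sketch of that change, assuming a Vizard-style script where the environment is loaded with vizfx.addChild and the target is a named node inside your model; the file path below is a placeholder, not the template's actual code:

import vizfx

# Placeholder path - substitute your own environment model
environment = vizfx.addChild('resources/my_environment.osgb')

# The line to edit: replace the string with your target's name
# exactly as it appears in Inspector
target_object = 'StarryNightPainting'

# Grab the named node so the rest of the script can react to it
target_node = environment.getChild(target_object)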

Task 2: Visual Search Single Object - Experimenting with Variable Manipulation

(No Code version in VisualSearch1_SingleObject_GUI)

Task Overview

'Visual Search Single Object' offers a focused approach to demonstrating experimental control in a visual search task. This setup is particularly valuable for:

  • Setting up experimental conditions using a STIM file.
  • Triggering specific events, such as ending a trial when the target is found.
  • Configuring constants through a configuration file.
  • Recording experimental data for analysis with tools like matplotlib and pandas.
  • Analyzing comparisons between conditions and measurements.
  • Configuring and creating the task entirely with no code, or by using a STIM file.

Experiment Dynamics

  • Independent Variables: Object size, object position, and which model is used for each trial. These variables can be modified via the STIM and config files.
  • Dependent Variable: Time taken by the participant to find the target, serving as a key metric for assessing the effects of the independent variables.

Task Description

In this task, participants are placed in an environment with a single object: the target. The challenge is to locate this object, whose size, position, or model (the independent variables) vary across trials. This variation is crucial for examining how changes in object characteristics influence the time the participant takes (the dependent variable) to locate the target. The target can also be added to an environment containing an array of other objects.

Data Collection and Analysis

Post-session, the data is stored in the data folder. Analyze this data using the SingleObject_Analysis tools to explore the correlation between the object's size and position and the time taken to find it.
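As an illustration of this kind of analysis, here is a minimal pandas/matplotlib sketch; the file name and column names ('object size', 'time_to_find') are hypothetical, so check your recorded files for the actual headers and paths:

import pandas as pd
import matplotlib.pyplot as plt

# Hypothetical file and column names - adjust to match your recorded data
df = pd.read_csv('data/single_object_results.csv')

# Mean time to find the target, grouped by the size condition
summary = df.groupby('object size')['time_to_find'].mean()

summary.plot(kind='bar')
plt.ylabel('Mean time to find target (s)')
plt.title('Search time by object size')
plt.show()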

Customizing the Experiment with Code

  • Environment and Target Object Setup:

    • Open your environment in the Inspector tool.
    • Add your target object using File -> Add.
    • Adjust the object's position and size for each trial using translate, scale, and rotate tools.
    • Right-click on the object to copy its position and dimension values, then paste them into the config file.
    • Also update TARGET_OBJECT and TARGET_OBJECT_NAME, as well as the name in the STIM file (in the SingleObject_StimFile folder) if using multiple objects.

Configuring Experimental Conditions

Tailor these parameters in the configuration file to design your experiment:

import vizfx

NUMBER_OF_TRIALS = 3
RANDOMIZE = True
USE_SINGLE_OBJECT = True
SHOW_INSTRUCTIONS = True
ENVIRONMENT_MODEL = "sightlab_resources/example_resources/dojo_clear.osgb"
# ENVIRONMENT_MODEL = 'sightlab_resources/environment/dojo2.osgb'
STIM_FILE_LOCATION = "SingleObject_StimFile/stim_file_simple.csv"
SIZE_VARIABLE = "object size"
POSITION_VARIABLE = "object position"
MODEL_VARIABLE = "object model"
target_object_size = {"small": [0.5, 0.5, 0.5], "medium": [2, 2, 2], "large": [4, 4, 4]}
target_object_position = {
    "position1": {"coordinates": [0, 1, 2], "euler": ([0, 0, 0])},
    "position2": {"coordinates": [0, 1.5, 3], "euler": ([0, 0, 0])},
    "position3": {"coordinates": [1, 0, 2], "euler": ([0, 0, 0])},
}
target_object_models = {
    "basketball": "sightlab_resources/objects/basketball.osgb",
    "volleyball": "sightlab_resources/objects/volleyball.osgb",
    "soccerball": "sightlab_resources/objects/soccerball.osgb",
}

if USE_SINGLE_OBJECT:
    TARGET_OBJECT = vizfx.addChild("sightlab_resources/objects/basketball.osgb")
    TARGET_OBJECT_NAME = "basketball"
else:
    TARGET_OBJECT = [
        vizfx.addChild(model_path) for model_path in target_object_models.values()
    ]
    TARGET_OBJECT_NAME = list(target_object_models.keys())

Update these settings to experiment with various object sizes and positions.

  • Use the STIM file to modify which condition occurs in each trial (see the sketch after this list).
  • Environment, Target Object, and File Paths: Define the paths for your environment model, target object, and STIM file as required.
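For example, here is a minimal sketch of how a STIM row could drive the per-trial condition. It assumes the CSV's column headers match the SIZE_VARIABLE, POSITION_VARIABLE, and MODEL_VARIABLE names above and that it runs after the configuration block, so the target_object_size, target_object_position, and target_object_models dictionaries are in scope; the actual file in SingleObject_StimFile may be laid out differently:

import csv

# Hypothetical reader: each row names one condition per variable,
# which is then looked up in the configuration dictionaries above
with open("SingleObject_StimFile/stim_file_simple.csv") as f:
    for trial_number, row in enumerate(csv.DictReader(f), start=1):
        size = target_object_size[row["object size"]]               # e.g. [2, 2, 2]
        position = target_object_position[row["object position"]]   # coordinates + euler
        model_path = target_object_models[row["object model"]]
        print(trial_number, size, position["coordinates"], model_path)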

Customizing the Experiment with No Code

  • Set up 3 conditions in Inspector (you can use Create -> External Reference so you don't need a new environment model for every condition, but can instead reference one file)
  • Run the GUI configurator and choose a condition for each trial
  • Label each condition
  • Use an end condition of gaze time on the target object

Customizing Single Object GUI

Either make a copy of the project folder or replace the models for "condition1", "condition2", etc. with your own environment and target object. In the GUI interface, click Modify and change the object name for the end trial condition. 

Task 3: Visual Search Multiple Objects - Analyzing Choice and Confidence

Task Overview

In the 'Visual Search Multiple Objects' task, participants are engaged in an environment where they must discern a target object based on size differences. This task is designed to study participant choices and confidence levels under varying conditions, and provides rich data for analysis.

Task Dynamics

  • Challenge: Among a series of objects, one object differs in size (either larger or smaller).
  • Participant Interaction:
    • Object Selection: Use a highlighter tool to select the object they believe is different.
    • Confidence Rating: After selection, rate their confidence in their choice.
  • Variable Conditions: Both the number of objects presented and their sizes can be varied, offering a diverse range of experimental setups.

Data Analysis and Visualization

  • Post-Experiment Tools: Engage with an interactive session replay, including heatmaps and scan paths.
  • Data Insights: Analyze charts that correlate confidence levels, object size, the number of objects, and the time taken to identify the target.
  • Comprehensive Heatmaps: Load and view data from specific conditions to gain deeper insights.

Customization Options

Customize the experiment by adjusting the following parameters in Config_Visual_Search2.py (an illustrative example follows the list):

  • Number of Trials: NUMBER_OF_TRIALS (Set the total number of trials to run)
  • Number of Objects: NUM_OBJECTS (Define how many objects appear in each trial)
  • Object Size Variability: target_object_size (Choose to vary the size of the target object or use another criterion for differentiation)
  • Environment Model: ENVIRONMENT (Specify the environment in which the task takes place)
  • Target Objects: TARGET_OBJECT and DIFFERENT_TARGET_OBJECT (Designate the regular and different-sized target objects)
  • Biopac Integration: BIOPAC_ON (Toggle to use Biopac AcqKnowledge for physiological data integration)
  • STIM File Path: STIM_FILE_PATH (Provide the path to your STIM file)
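Put together, a hedged sketch of what these settings might look like; every value and path below is an illustrative placeholder (reusing asset paths mentioned elsewhere in these templates), not the template's actual defaults:

# Illustrative values only - check Config_Visual_Search2.py for the real defaults
NUMBER_OF_TRIALS = 5        # total trials to run
NUM_OBJECTS = 8             # objects presented per trial
BIOPAC_ON = False           # toggle Biopac AcqKnowledge integration

# Placeholder paths and names
ENVIRONMENT = "sightlab_resources/example_resources/dojo_clear.osgb"
TARGET_OBJECT = "sightlab_resources/objects/basketball.osgb"
DIFFERENT_TARGET_OBJECT = "sightlab_resources/objects/basketball.osgb"  # same model, resized below
STIM_FILE_PATH = "VisualSearch2_StimFile/stim_file.csv"

# Hypothetical size scheme: the odd-one-out target is scaled relative to the rest
target_object_size = {"regular": [1, 1, 1], "different": [1.5, 1.5, 1.5]}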

Data Storage

  • All experimental data is saved in the data folder.

Controls

  • Desktop:
    • Activate highlighter: Right Mouse Button.
    • Confirm selection: Left Mouse Button.
    • Rate confidence: Arrow Keys.
  • Head-Mounted Display (HMD):
    • Activate highlighter: Grip Button.
    • Confirm selection: Trigger.
    • Rate confidence: Left and Right Thumbstick.

Regular controls for navigation and interaction within the environment also apply.
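As a rough illustration of how the desktop bindings could be wired in a Vizard script, here is a hypothetical sketch using vizact callbacks; the handler functions are placeholders and this is not the template's actual input handling:

import viz
import vizact

def activate_highlighter():
    print('highlighter on')                  # placeholder action

def confirm_selection():
    print('selection confirmed')             # placeholder action

def change_confidence(delta):
    print('confidence adjusted by', delta)   # placeholder action

# Desktop mappings described above
vizact.onmousedown(viz.MOUSEBUTTON_RIGHT, activate_highlighter)
vizact.onmousedown(viz.MOUSEBUTTON_LEFT, confirm_selection)
vizact.onkeydown(viz.KEY_UP, change_confidence, 1)
vizact.onkeydown(viz.KEY_DOWN, change_confidence, -1)

viz.go()   # start the Vizard loop (the templates handle this themselves)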

Task 4: Visual Search Randomize - Timed and Standard Challenges

Overview

The 'Visual Search Randomize' task (Visual_Search_Randomize.py), initially set in an art gallery environment (modifiable as per your needs), offers a dynamic visual search experience. In each trial, participants are immersed in a virtual environment scattered with various objects. Their primary goal is to locate a specific target object. The task stands out for its randomization of object locations in each trial, introducing both consistency and variability in the search challenge.

Experimental Conditions

  • Standard Visual Search:
    • Objective: Find the target object without time constraints.
    • Data Recording: Time taken to find the target is logged in a CSV file, along with the trial number. This data serves as a key indicator of visual search efficiency.
  • Timed Visual Search:
    • Objective: Locate the target within a 10-second window. The trial concludes when the target is found or the time limit is reached.
    • Data Recording: Similar to the Standard condition, the time to locate the target is recorded.

In both conditions, finding the target triggers an auditory alarm, and the trial concludes 3 seconds thereafter.
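For reference, here is a minimal sketch of this kind of per-trial CSV logging using only the Python standard library; the file name, columns, and timing approach are illustrative rather than the template's exact implementation:

import csv
import time

def log_trial(trial_number, condition, start_time, output_path='data/visual_search_times.csv'):
    """Append one row with the trial number, condition, and time to find the target."""
    time_to_find = time.perf_counter() - start_time
    with open(output_path, 'a', newline='') as f:
        csv.writer(f).writerow([trial_number, condition, round(time_to_find, 3)])

# Usage sketch: note the start time when a trial begins, then call log_trial()
# when the target is found or the 10-second limit expires
start = time.perf_counter()
# ... participant searches ...
log_trial(1, 'Timed', start)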

Post-Trial Data Analysis

  • Data Files: Access trial data in the data folder.
  • Charts: Utilize charts generated via matplotlib in VisualSearchCharts for detailed analysis.
  • Visualization Tools: Run visualization_of_data to compare the two conditions or visualization_of_data_combined for a comprehensive comparison across all data files and conditions.

Customization Options (in the Config_Visual_Search_Randomize.py file) 

  • Environment Customization: ENVIRONMENT (Specify any virtual environment as per your experiment's theme)
  • Target Object Specification: TARGET_OBJECT_NAME (Define the exact name of your target object; default set to 'GEODE' from the scene objects). Also change this in the STIM file (see below)
  • Object Randomization: OBJECTS_TO_RANDOMIZE (List the names of objects to be randomized in each trial)
  • Target Title Assignment: TARGET_TITLE (Assign a specific name for the target object)

Additionally, modify the STIM file in ArtGallery_StimFile/stim_file_visual_search.txt to change the condition, the target object, and the duration of the timed trials.
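To illustrate the randomization behind OBJECTS_TO_RANDOMIZE, here is a hedged sketch that shuffles a set of candidate positions among the listed objects at the start of each trial; the environment path, the object names other than 'GEODE', the positions, and the getChild lookup are placeholder assumptions, not taken from the template:

import random
import vizfx

# Placeholder environment and object names ('GEODE' is the documented default target)
environment = vizfx.addChild('resources/art_gallery.osgb')
OBJECTS_TO_RANDOMIZE = ['GEODE', 'VASE', 'STATUE']
candidate_positions = [[0, 1, 2], [2, 1, 3], [-2, 1, 4]]   # hypothetical spots in the scene

def randomize_objects():
    """Assign each listed object a randomly chosen position for the new trial."""
    positions = candidate_positions[:]
    random.shuffle(positions)
    for name, pos in zip(OBJECTS_TO_RANDOMIZE, positions):
        environment.getChild(name).setPosition(pos)

randomize_objects()   # call at the start of each trial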

Data Saving Mechanism

  • Format: All trial data, including timings and trial numbers, are saved in a CSV file format for ease of analysis and record-keeping.