
External Application Data Recorder

Download the latest version from here

Overview

The External Application Data Recorder enables you to record, save, and synchronize sensor data while running external VR applications, including SteamVR games, Oculus apps, web-based VR experiences, and standalone/Android-based applications (note: standalone mode does not include eye tracking). After a session you can run the replay and view the gaze point synchronized with the data. There is also an experimental mode that uses AI to detect object views and view counts.

Supports the following headsets/hardware modes:

  • Vive Focus Vision, Meta Quest Pro, Vive Focus 3, Varjo XR-3/XR-4, Omnicept, Vive Pro Eye (select the Vive Focus 3 Recorder), and Meta Quest 3/3S and Quest 2 (head position only, no eye tracking). Generic SteamVR and OpenXR headsets are also supported, but results may vary.
  • Additionally supports screen-based eye trackers from EyeLogic and Tobii.

Recorded Data

The following data can be recorded during experiments. By default, data is saved to the data folder unless a different directory is specified:

Eye and Gaze Metrics

  • Eye Gaze Position
  • Eye Gaze Rotation (combined and per eye)
  • Gaze Point Overlay
  • Fixations
  • Number of Fixations
  • Saccades: saccade state, amplitude, velocity, peak velocity, average amplitude, average saccadic amplitude, average saccadic velocity, saccadic velocity peak
  • View Counts and Dwell Time (experimental, using AI analysis)

Physiological and Biometric Data

  • Pupil Diameter (Varjo, Omnicept, Vive Pro Eye, Pupil Labs)
  • Eye Openness Value (Varjo, Omnicept, Vive Pro Eye, Vive Focus 3, Vive Focus Vision)
  • Heart Rate (Omnicept)
  • Cognitive Load (Omnicept)
  • Facial Expression Values (Meta Quest Pro)

Positional and Orientation Data

  • Head Orientation (requires manual alignment if origins differ)

Interaction and Event Data

  • Interactions (based on selected event trigger)
  • Custom Flags

Synchronization and Timing

  • Timestamp (relative to trial start and UNIX timestamp for external synchronization)

Recording and Integration

  • Gaze Matrix of Eye Rotation
  • Video Recording (video recordings can be imported into AcqKnowledge to synchronize visual data with physiological data)
  • Biopac AcqKnowledge Physiological Data (when connected to Biopac)

Setup

To use this application ensure the following Python libraries are installed:

  • pyautogui
  • pywin32
  • pygetwindow
  • opencv-python
  • mss
  • numpy (comes with the SightLab installation)
  • For the Vive Focus Vision, Vive Focus 3, and Vive Pro Eye, you will also need the SR Anipal driver (Download here)
  • K-Lite Codec Pack (can get it from here)

You can install these via the Package Manager. You will also need to install Tkinter from the Vizard installer.
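
If you prefer the command line to the Package Manager, the same packages can be installed with pip (these are the standard PyPI package names; numpy is already included with the SightLab installation):

pip install pyautogui pywin32 pygetwindow opencv-python mss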


How to Run

  1. Open the desired VR application on your PC.
  2. Edit the Data_Recorder_Config.py file if you are using Biopac, saving face tracking data, or want to adjust other options.
  3. Start the Data_Recorder_External_Application script (for screen-based eye trackers, use the Screen_Based folder and run External_Application_Data_Recorder_Screen_Based instead).
  4. Choose the title of your application from the dropdown (this will show all open windows)
  5. Choose how long you want to record (if USE_TIMER_DROPDOWN is set to True)
  6. Choose your hardware
  7. Click on the Data Recorder window to make it active and press Spacebar to begin recording. The recording will stop automatically after the timer that you set. To quit early at any time, Alt+Tab back to the Data Recorder window and press Spacebar.
  8. To view a replay, run DataRecorderReplay.py, which will display gaze point data on the virtual screen.
  9. Data files are stored in the data folder, and the original video is saved in the recordings folder.
  10. Press the "1" key in the replay to record a video with the gaze point (you will need to let this play through in real time). Press "2" to stop. The video will be saved in the "replay_recordings" folder. Open the recorded video in AcqKnowledge to see the synchronization with physiological data. See here for how to do that.
  11. Use "Auto AI View Detection" to get object views (experimental)

Note: It is good practice to run a practice test by looking at a few spots in a scene in your application, then running the replay to verify the gaze point is accurate. If it seems off, you can fine-tune the gaze point position in Data_Recorder_Config.py (REPLAY_SCREEN_CALIBRATION moves the gaze point in X, Y, Z, while VIEWPOINT_ADJUSTMENT_POSITION and VIEWPOINT_ADJUSTMENT_EULER move the virtual screen's position and orientation within the window).

Configuration Settings

# ===== Biopac & Network Sync Settings =====
BIOPAC_ON = True  # Communicate with Biopac Acqknowledge
LOCK_TRANSPORT = True  # Lock the transport
NETWORK_SYNC_KEY = 't'  # Key to send event marker to Acqknowledge
NETWORK_SYNC_EVENT = 'triggerPress'  # Event trigger for marker
USE_NETWORK_EVENT = False  # Send network event to external app

# ===== Recording & Data Logging =====
RECORD_VIDEO_OF_PLAYBACK = True  # Record playback session
RECORD_GAZE_MAT = True  # Save gaze matrix data
RECORD_FACE_TRACKER_DATA = False  # Save facial expression data
RECORD_VIDEO = True  # Start/Stop video recording

# ===== Replay Calibration & View Adjustment =====
REPLAY_SCREEN_CALIBRATION = [-0.4, 0.2, 0.0]
VIEWPOINT_ADJUSTMENT_EULER = [0, 4, 0]
VIEWPOINT_ADJUSTMENT_POSITION = [-0.3, 0.1, 0]
STRETCH_FACTOR = 1.25

FOLLOW_ON = True  # Turn on first-person view in Replay by default

# ===== Timer & Session Control =====
USE_TIMER = True  # Use timer instead of keypress
USE_TIMER_DROPDOWN = True  # Enable dropdown for timer length
DEFAULT_TIMER_LENGTH = 10  # Default timer length (sec)
START_END_SESSION_KEY = ' '  # Key to stop/start trial
PLAY_END_SOUND = True  # Play sound at end of trial/session

You can also set a trial label that will be written into the data files. Find the line in the code that has yield sightlab.startTrial() and change it to something like:

yield sightlab.startTrial('Custom Label Here')

Auto AI View Detection (Experimental)

Required Python libraries:

  • openai (also requires an OpenAI API key, placed in a text file named key.txt in the root directory; see the sketch below)
  • opencv-python

Note: The individual image analysis can use up a lot of tokens. See OpenAI's API pricing at https://openai.com/api/pricing/
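
For reference, loading the key from key.txt and sending a single frame to a vision-capable model might look like the following. This is only a minimal sketch, assuming the openai (>= 1.0) Python client, an assumed model name, and a hypothetical frame file; it is not the exact code in Auto_AI_View_Detection.py:

import base64
from openai import OpenAI

# Read the API key from key.txt in the root directory
with open("key.txt") as f:
    client = OpenAI(api_key=f.read().strip())

# Encode one extracted frame as base64 (hypothetical filename)
with open("frame_0000.png", "rb") as f:
    b64_image = base64.b64encode(f.read()).decode("utf-8")

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed vision-capable model; substitute the model you prefer
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "What object is under the gaze point overlay in this image?"},
            {"type": "image_url", "image_url": {"url": f"data:image/png;base64,{b64_image}"}},
        ],
    }],
)
print(response.choices[0].message.content)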

Step 1: Run External Application and the External Application Data Recorder

Run the External Data Recorder and save the data.

Step 2: Run Replay and save video with gaze point overlay

  1. Start the replay script.
  2. Press ‘1’ to start recording and ‘2’ to stop.

This recorded video will be saved in a folder called “replay_recordings” and will include the gaze point. This will be used for the following steps.

Step 3: Extract Frames from the Video

Run the "convert video to images.py" script.

This script will create a series of images, named sequentially (e.g., frame_0000.png, frame_0001.png, etc.) from the given video file.
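
For reference, the core of this frame-extraction step can be reproduced with OpenCV. This is a minimal sketch (not the bundled script itself); the video path and output folder names are assumptions:

import os
import cv2

VIDEO_PATH = "replay_recordings/replay.mp4"  # assumed path to the recorded replay video
OUTPUT_DIR = "frames"                        # assumed output folder for the extracted images
os.makedirs(OUTPUT_DIR, exist_ok=True)

cap = cv2.VideoCapture(VIDEO_PATH)
frame_idx = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break  # no more frames
    # Save sequentially numbered images: frame_0000.png, frame_0001.png, ...
    cv2.imwrite(os.path.join(OUTPUT_DIR, f"frame_{frame_idx:04d}.png"), frame)
    frame_idx += 1
cap.release()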

Step 4: Automate Image Upload and Object Views

  • Run Auto_AI_View_Detection.py
  • Output is saved as a text file in the root directory as openai_response_date_time.txt
  • You can ask follow-up questions in the Follow_Up_Questions.py script (on line 33). The AI responses will be saved to the "Analysis_Answers" folder, and if SPEAK_RESPONSE = True the answers can also be spoken back in real time.

Considerations for Dwell Time Detection

Dwell time detection involves checking if the same object is identified in consecutive frames for a minimum duration. In the above script:

  • dwell_time_threshold determines how many frames constitute a "dwell time".
  • When the object identified by GPT remains the same across consecutive frames and exceeds the threshold, it is counted as a successful dwell event. A possible value is 15 frames, so if the video is 30 fps that would be half a second for counting a "view" or "dwell time" (see the sketch below).
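
The dwell logic itself can be sketched as follows. Only dwell_time_threshold comes from the script described above; the other names are illustrative, and labels_per_frame stands in for the per-frame object names returned by GPT:

dwell_time_threshold = 15   # consecutive frames required to count as a dwell (0.5 s at 30 fps)

# Illustrative stand-in for the per-frame object names returned by GPT
labels_per_frame = ["chair"] * 20 + ["table"] * 5 + ["chair"] * 30

dwell_events = []           # list of (object_label, frame_count) dwell events
current_label = None
consecutive_frames = 0

for label in labels_per_frame:
    if label == current_label:
        consecutive_frames += 1
    else:
        # Object changed: record the previous run if it lasted long enough
        if current_label is not None and consecutive_frames >= dwell_time_threshold:
            dwell_events.append((current_label, consecutive_frames))
        current_label = label
        consecutive_frames = 1

# Record the final run as well
if current_label is not None and consecutive_frames >= dwell_time_threshold:
    dwell_events.append((current_label, consecutive_frames))

print(dwell_events)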

Optimizations and Challenges

  • Reducing Frame Rate: Depending on the frame rate of the video, processing every single frame might be overkill. Consider sampling every n frames (e.g., every 5th frame), as sketched below.
  • Efficient API Usage: API calls can be costly and time-consuming. Instead of uploading all frames, use conditions to narrow down which frames need detailed analysis.
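
As an illustration, sampling every nth extracted frame before uploading could look like this. SAMPLE_EVERY and analyze_frame are hypothetical names, not settings or functions from the scripts above:

import os

SAMPLE_EVERY = 5  # analyze only every 5th extracted frame

def analyze_frame(path):
    # Placeholder: in practice this would upload the frame to the API (see the earlier sketch)
    print(f"Would analyze {path}")

for frame_idx, frame_name in enumerate(sorted(os.listdir("frames"))):
    if frame_idx % SAMPLE_EVERY != 0:
        continue  # skip in-between frames to save API calls
    analyze_frame(os.path.join("frames", frame_name))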

Additional Extended Features

  • Rating/Likert Scales & Surveys
    Easily collect participant feedback. Customize scale labels and capture responses programmatically and in data exports. Ratings must be collected before or after the external session.

  • Inputs/Demographics
    Gather participant data (e.g., age, ID, gender) before starting the external session.

  • Adding a Label/Condition
    Tag sessions with experimental conditions for sorting and analysis.

  • Flags, Network Events, Button Clicks
    Enable logging of custom triggers (e.g., spacebar presses, network signals) during the session for synchronized event tracking.

  • Speech Recognition (optional)
    Record microphone input for later analysis or transcription.

  • Transcriptions
    Combine mic recordings with post-session transcription tools to create searchable dialogue data.

  • Instructions
    Show instructions or display guidance on the mirrored desktop before launching the external app.

  • Plotly for Additional Data Analysis
    Replay session data with built-in Plotly tools to visualize gaze, movement, and behavioral metrics.

  • Face Tracking and Expression Analysis
    Automatically capture facial expressions with supported headsets (e.g., Meta Quest Pro) if enabled in the config.

  • Screen-Based Eye Trackers
    Use Tobii screen-based trackers for gaze logging during desktop-based external applications.

  • Average Physiological Data
    Biopac integration allows tracking and averaging of heart rate, skin conductance, and cognitive load throughout the session.

  • Baseline
    Record a short “resting” or neutral task before launching the external app to establish baseline physiological readings.

Limitations and Tips

  • Window not switching: If the application fails to bring the other application's window into focus, you may need to use Alt+Tab to switch windows manually. Another option is to run the application through SteamVR and use "Display VR View" to mirror it, then select that mirror as the window to record.
  • Standalone applications need to use casting (make sure to press F11 to full-screen the browser). Start casting on the Meta headset from Settings, then go to https://www.oculus.com/casting/. This will use only head position for the gaze point and replay, and will still synchronize with the physiological data if using AcqKnowledge.
  • Face tracking data can be found in the data folder and visualized using the facial_expressions_over_time.py script.
  • Gaze point position will not be accurate along the Z axis, as the gaze ray only hits a collision box rather than the objects in the scene the user is actually looking at.
  • Head orientation (Meta Quest Pro) needs to be manually aligned if not using the same origin starting point.
  • Regions or objects of interest cannot be tagged manually (there is an experimental option to use AI to auto-recognize objects).
  • For headsets that go through SteamVR (Omnicept, Vive Focus 3, Vive Pro Eye) you will not get head orientation (but it will still show in the replay with the virtual screen).
  • Scrubbing through the Session Replay uses the B and N (or C and V) keys to skip; dragging the slider does not move the video.
  • Note: For the Vive Focus Vision and Vive Focus 3, you need to use the SRAnipal driver instead of OpenXR. Download here. (You don't need to run the Vive Console, just Vive Streaming, but the Vive Console software has to be installed to have access to the SRAnipal driver.)
  • To verify that the eye tracker is working, it is recommended to run SightLab_VR.py first and then press 'p' to see your gaze point moving.
  • If running with SteamVR, minimize the SteamVR window first so that it doesn't show on top of the video.
  • You can also leverage the Plotly visualization example to view a walk path and gaze path (requires the Plotly Python library).
  • The eye tracking may need to be calibrated on the device first; you can then also calibrate the replay screen by focusing on a point in the scene and adjusting REPLAY_SCREEN_CALIBRATION in the config file until the gaze point matches up, if it is off.
  • You can uncomment the specific viewpoint position values in the Data_Recorder_Config.py file using Shift+Alt+3 and comment out the ones you're not using (Alt+3). Presets are included for the Meta Quest Pro, Vive Focus Vision, and Desktop.
