
External Application Data Recorder

Download latest version here

Overview

The External Application Data Recorder enables you to record, save, and synchronize sensor data while running external VR applications, including SteamVR games, Oculus apps, web-based VR experiences, and stand-alone Android-based applications (note: stand-alone mode does not include eye tracking). After a session you can run the replay and view the gaze point synchronized with the data. An experimental mode uses AI to detect object views and view counts.

Supports the following headsets/hardware modes:

  • Vive Focus Vision, Meta Quest Pro, Vive Focus 3, Varjo XR-3/XR-4, Omnicept, Vive Pro Eye (select Vive Focus 3 Recorder), and Meta Quest 3/3S and 2 (no eye tracking; head position only). Support is included for all generic SteamVR and OpenXR headsets, but results may vary.
  • Additionally, supports screen-based eye trackers from Eyelogic and Tobii
  • Note: As of SightLab version 2.5.5 there is an option for real-time streaming of the eye tracking overlay

Recorded Data

The following data can be recorded during experiments. By default, data is saved to the data folder unless a different directory is specified. See here for the full list.

Eye and Gaze Metrics
- Eye Gaze Position
- Eye Gaze Rotation (combined and per eye)
- Gaze Point Overlay
- Fixations
- Number of Fixations
- Saccades: saccade state, amplitude, velocity, peak velocity, average amplitude, average saccadic amplitude, average saccadic velocity, saccadic velocity peak
- View Counts and Dwell Time (experimental, using AI analysis)

Physiological and Biometric Data
- Pupil Diameter (Varjo, Omnicept, Vive Pro Eye, Pupil Labs)
- Eye Openness Value (Varjo, Omnicept, Vive Pro Eye, Vive Focus 3, Vive Focus Vision)
- Heart Rate (Omnicept)
- Cognitive Load (Omnicept)
- Facial Expression Values (Meta Quest Pro)

Positional and Orientation Data
- Head Orientation (requires manual alignment if origins differ)

Interaction and Event Data
- Interactions (based on selected event trigger)
- Custom Flags

Synchronization and Timing
- Timestamp (relative to trial start and UNIX timestamp for external synchronization)

Recording and Integration
- Gaze Matrix of Eye Rotation
- Video Recording
Video recordings can be imported into AcqKnowledge to synchronize visual data with physiological data, or, with AcqKnowledge 6.02, the AcqKnowledge playback graph can be controlled directly via the SightLab Replay slider.
- Biopac AcqKnowledge Physiological Data (when connected to Biopac)
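Because each sample carries both a trial-relative timestamp and a UNIX timestamp, externally recorded logs can be aligned by simple arithmetic. A minimal sketch of that conversion (the function and variable names are hypothetical, not part of SightLab):

```python
def to_unix(relative_timestamps, trial_start_unix):
    """Convert trial-relative timestamps (seconds) to absolute UNIX times."""
    return [trial_start_unix + t for t in relative_timestamps]

# Example: three samples taken 0, 1, and 2 seconds into the trial
print(to_unix([0.0, 1.0, 2.0], 1700000000.0))
# [1700000000.0, 1700000001.0, 1700000002.0]
```

The same trick works in reverse for stamping an external event stream onto the trial-relative timeline.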

Setup

To use this application, ensure the following Python libraries are installed:

  • pyautogui
  • pygetwindow
  • mss
  • opencv-python (now included with SightLab)

To install them all, open the Vizard IDE, go to Tools > Package Manager, click on the CMD tab, and copy and paste this text: install pyautogui pygetwindow mss opencv-python

Additional requirements:

  • For the Vive Focus Vision, Vive Focus 3, and Vive Pro Eye you will need the SR Anipal driver (Download here)
  • K-Lite Codec Pack (can get it from here)


How to Run

  1. Open the desired VR application on your PC.
  2. Edit the Data_Recorder_Config.py file if using Biopac, saving Face Tracking data, or wanting to adjust other options.
  3. Start the Data_Recorder_External_Application script (or, for screen-based eye trackers, use the Screen_Based folder and run External_Application_Data_Recorder_Screen_Based)
  4. Choose the title of your application from the dropdown (this will show all open windows)
  5. Choose how long you want to record (if USE_TIMER_DROPDOWN is set to True)
  6. Choose your hardware
  7. Click on the Data Recorder window to make it active and press Spacebar to begin recording. Recording will stop automatically after the timer you set. To quit early at any time, Alt+Tab back to the Data Recorder window and press Spacebar. If REAL_TIME_STREAMING is set to True you can also see a real-time view of the gaze point; since this is a beta feature it is off by default, and you can still see the gaze point overlay in the replay.
  8. Data files are stored in the data folder, and the original video is stored in the recordings folder.
  9. To view a replay, run DataRecorderReplay.py, which will display gaze point data on the virtual screen. On the replay, press the "." key to hide the information panel and use "b" and "n" or "c" and "v" to scrub or fast-forward/rewind the playback. SPACE to toggle pause/playback, and "r" to replay from the beginning. Note: If the replay crashes, you may need to close the original application first.
  10. Press the "1" key in the replay to record a video with the gaze point (will need to let this play through in real time). Press "2" to stop. This will save in the "replay_recordings" folder. Open the recorded video in AcqKnowledge to see the synchronization with physiological data. See here for how to do that.
  11. Use "Auto AI View Detection" to get object views (experimental, see below)

Note: It is good practice to run a practice test by looking at a few spots in a scene in your application, then running the replay to verify the gaze point is accurate. If it seems off, you can fine-tune the position of the gaze point in Data_Recorder_Config.py (REPLAY_SCREEN_CALIBRATION moves the gaze point in X, Y, Z, while VIEWPOINT_ADJUSTMENT_POSITION and VIEWPOINT_ADJUSTMENT_EULER move the virtual screen's position within the window).

Toggle visualizations such as the scan path, fixation spheres, and the heatmap in the replay. Note: You may need to toggle the items off and on first to see the visualizations (for the heatmap, you may need to toggle "Occlusion" and move the intensity slider). The "Gradient" heatmap is suggested, as it fades over time. Visualizations other than the gaze point are not available for stand-alone Android-based applications.

Configuration Settings

# ===== Biopac & Network Sync Settings =====
BIOPAC_ON = True  # Communicate with Biopac AcqKnowledge
SAVE_BIOPAC_AVERAGE = False  # Requires BIOPAC_ON to also be True
LOCK_TRANSPORT = True  # Lock the transport
NETWORK_SYNC_KEY = 't'  # Key to send event marker to Acqknowledge
NETWORK_SYNC_EVENT = 'triggerPress'  # Event trigger for marker
USE_NETWORK_EVENT = False  # Send network event to external app

# ===== Recording & Data Logging =====
RECORD_VIDEO_OF_PLAYBACK = True  # Record playback session
RECORD_GAZE_MAT = True  # Save gaze matrix data
RECORD_FACE_TRACKER_DATA = False  # Save facial expression data
RECORD_VIDEO = True  # Start/Stop video recording

# ===== Replay Calibration & View Adjustment =====
REPLAY_SCREEN_CALIBRATION = [-0.4, 0.2, 0.0]
VIEWPOINT_ADJUSTMENT_EULER = [0, 4, 0]
VIEWPOINT_ADJUSTMENT_POSITION = [-0.3, 0.1, 0]
STRETCH_FACTOR = 1.25

FOLLOW_ON = True  # Turn on first-person view in Replay by default

# ===== Timer & Session Control =====
USE_TIMER = True  # Use timer instead of keypress
USE_TIMER_DROPDOWN = True  # Enable dropdown for timer length
DEFAULT_TIMER_LENGTH = 10  # Default timer length (sec)
START_END_SESSION_KEY = ' '  # Key to stop/start trial
PLAY_END_SOUND = True  # Play sound at end of trial/session
TRIAL_CONDITION = 'A'  # Custom label that goes into your data file

# ===== Video Recording Size =====
VIDEO_RECORDING_WINDOW_WIDTH = viz.setOption('viz.AVIRecorder.maxWidth', '1920')
VIDEO_RECORDING_WINDOW_HEIGHT = viz.setOption('viz.AVIRecorder.maxHeight', '1080')

REAL_TIME_STREAMING = False  # Set to True to see a gaze point overlay in real time (beta feature)

You can also set a trial label that will go into the data files by editing the TRIAL_CONDITION constant.

Auto AI View Detection

Required Python libraries

OpenAI: requires an OpenAI API key. Earlier versions expect the key in a text file named key.txt in the root directory; the latest version requires setting environment variables:

  • In the Windows search box, type "cmd", then in the command prompt enter: setx OPENAI_API_KEY "your-api-key", setx GEMINI_API_KEY "your-api-key", setx ELEVENLABS_API_KEY "your-api-key", setx ANTHROPIC_API_KEY "your-api-key"
  • Restart Vizard
  • With this method you don't need to keep the keys in a folder in your project, and your API keys can be accessed from any folder.
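With the environment-variable method, scripts read the key through os.environ rather than from key.txt. A minimal sketch (the helper name is hypothetical; the variable names match the setx commands above):

```python
import os

def get_api_key(name="OPENAI_API_KEY"):
    """Return an API key set via setx, or None (with a hint) if it is missing."""
    key = os.environ.get(name)
    if key is None:
        print(f"{name} is not set -- run setx and restart Vizard")
    return key
```

Because setx writes to the user environment, a newly set key is only visible to programs started after the change, which is why Vizard must be restarted.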

Note: The individual image analysis can use a lot of tokens. See OpenAI token pricing here: https://openai.com/api/pricing/

Step 1: Run External Application and the External Application Data Recorder

Run the External Data Recorder and save the data.

  1. Start the replay script.
  2. Press ‘1’ to start recording and ‘2’ to stop.

This recorded video will be saved in a folder called replay_recordings and will include the gaze point. This will be used for the following steps.

Step 2: Run AOI_Tracker_Tool (new as of SightLab 2.5)

Controls:
p or SPACE → pause/resume playback
s → select a new ROI on the paused frame (press SPACE to pause first)

ENTER or SPACE → confirm the current ROI (after pressing "s")

q → quit and export results (note: you must press 'q' to exit; clicking to close the window won't work)

c → step backward one frame (when paused)
v → step forward one frame (when paused)

Selected ROIs will be tracked from the frame they are defined onwards,
using template matching with a correlation threshold.

Exported video will be saved in roi_videos_data

Outputs:

  • tracked_multiple.avi : annotated video with bounding boxes
  • tracked_boxes.csv : per-frame ROI detections and scores
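The per-frame CSV can be summarized with the standard library alone. The sketch below assumes column names like frame, roi, and score, which may differ from the actual export:

```python
import csv
import io
from collections import Counter

def roi_frame_counts(csv_text):
    """Count how many frames each ROI was detected in (column names assumed)."""
    counts = Counter()
    for row in csv.DictReader(io.StringIO(csv_text)):
        counts[row["roi"]] += 1
    return dict(counts)

# Tiny illustrative file: "poster" detected in 2 frames, "sign" in 1
sample = "frame,roi,score\n0,poster,0.91\n1,poster,0.88\n1,sign,0.80\n"
print(roi_frame_counts(sample))  # {'poster': 2, 'sign': 1}
```

At 30 fps, dividing each count by 30 gives an approximate on-screen duration per ROI.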

Step 3: Extract Frames from the Video

(Note: prior to SightLab 2.5, you need to manually copy the exported video from the last step to the replay_videos folder in Auto AI View Detection)

Run convert video to images.py

Select the video you just exported (the latest should be on top, or just under Sample_ROIs)

This script will create a series of images, named sequentially (e.g., frame_0000.png, frame_0001.png, etc.) from the given video file.
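The naming scheme can be sketched as follows; this is a stand-in illustration of what convert video to images.py does (using opencv-python, already a listed dependency), not the script itself:

```python
def frame_name(index):
    """Sequential frame filename, e.g. frame_0000.png."""
    return f"frame_{index:04d}.png"

def extract_frames(video_path, out_dir="frames"):
    """Dump every frame of a video as numbered PNGs (requires opencv-python)."""
    import os
    import cv2  # imported lazily so the naming helper works without OpenCV
    os.makedirs(out_dir, exist_ok=True)
    cap = cv2.VideoCapture(video_path)
    i = 0
    while True:
        ok, frame = cap.read()
        if not ok:  # end of video (or unreadable file)
            break
        cv2.imwrite(os.path.join(out_dir, frame_name(i)), frame)
        i += 1
    cap.release()
    return i  # number of frames written
```

Zero-padded names keep the frames in order when a folder listing is sorted alphabetically.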

Step 4: Automate Image Upload and Object Views

  • Run Auto_AI_ROI_View_Detection.py or Auto_AI_View_Detection.py
  • Output is saved as a text file in the root directory as openai_response_date_time.txt
  • You can ask follow-up questions in the Follow_Up_Questions.py script (on line 33). The AI responses will be saved to the "Analysis_Answers" folder, and if SPEAK_RESPONSE = True the answers can also be spoken back in real time.

Considerations for Dwell Time Detection

Dwell time detection involves checking if the same object is identified in consecutive frames for a minimum duration. In the above script:

  • dwell_time_threshold determines how many frames constitute a "dwell time".
  • When the object identified by GPT remains the same across consecutive frames and exceeds the threshold, it is counted as a successful dwell event. A possible value is 15 frames; if the video is 30 fps, that is half a second for counting a "view" or "dwell time".
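The dwell logic described above can be sketched as counting runs of identical per-frame labels (the function name is hypothetical; frame_labels would hold the object name the model returned for each frame):

```python
def count_dwell_events(frame_labels, dwell_time_threshold=15):
    """Count runs where the same label persists for at least the threshold."""
    events = 0
    run_label, run_length = None, 0
    for label in frame_labels + [None]:  # sentinel flushes the final run
        if label == run_label:
            run_length += 1
        else:
            if run_label is not None and run_length >= dwell_time_threshold:
                events += 1
            run_label, run_length = label, 1
    return events

# 20 frames on "poster" (one dwell at threshold 15), 5 on "door" (too short)
labels = ["poster"] * 20 + ["door"] * 5
print(count_dwell_events(labels, dwell_time_threshold=15))  # 1
```

If you sample every nth frame instead of every frame, divide the threshold accordingly so it still corresponds to the same wall-clock duration.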

Optimizations and Challenges

  • Reducing Frame Rate: Depending on the frame rate of the video, processing every single frame might be overkill. Consider sampling every n frames (e.g., every 5th frame).
  • Efficient API Usage: API calls can be costly and time-consuming. Instead of uploading all frames, use conditions to narrow down which frames need detailed analysis.
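Sampling every nth frame before upload can be a simple slice over the extracted frame filenames (the helper name is hypothetical):

```python
def sample_frames(frame_paths, every_n=5):
    """Keep only every nth frame to cut API cost (always keeps the first)."""
    return frame_paths[::every_n]

frames = [f"frame_{i:04d}.png" for i in range(12)]
print(sample_frames(frames, every_n=5))
# ['frame_0000.png', 'frame_0005.png', 'frame_0010.png']
```

At 30 fps, every_n=5 reduces the upload to 6 frames per second of video while still catching any dwell of half a second or longer.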

Additional Extended Features

  • Rating/Likert Scales & Surveys
    Easily collect participant feedback. Customize scale labels and capture responses programmatically and in data exports. Ratings must be collected before or after the external session.

  • Inputs/Demographics
    Gather participant data (e.g., age, ID, gender) before starting the external session.

  • Adding a Label/Condition
    Tag sessions with experimental conditions for sorting and analysis.

  • Flags, Network Events, Button Clicks
    Enable logging of custom triggers (e.g., spacebar presses, network signals) during the session for synchronized event tracking.

  • Speech Recognition (optional)
    Record microphone input for later analysis or transcription.

  • Transcriptions
    Combine mic recordings with post-session transcription tools to create searchable dialogue data.

  • Instructions
    Show instructions or display guidance on the mirrored desktop before launching the external app.

  • Plotly for Additional Data Analysis
    Replay session data with built-in Plotly tools to visualize gaze, movement, and behavioral metrics.

  • Face Tracking and Expression Analysis
    Automatically capture facial expressions with supported headsets (e.g., Meta Quest Pro) if enabled in the config.

  • Screen-Based Eye Trackers
    Use Tobii screen-based trackers for gaze logging during desktop-based external applications.

  • Average Physiological Data
    Biopac integration allows tracking and averaging of heart rate, skin conductance, and cognitive load throughout the session.

  • Baseline
    Record a short “resting” or neutral task before launching the external app to establish baseline physiological readings.

Limitations and Tips

  • Window not switching: If the application fails to bring the other application's window into focus, you may need to use Alt+Tab to switch windows manually. Another option is to run the application through SteamVR, use "Display VR View" to mirror the window, and record from that mirrored window instead (note: don't use VR View for Real Time Streaming, as it will override the view)
  • Replay screen has a distorted aspect ratio or the colors are off: install the K-Lite Codec Pack from https://codecguide.com/download_kl.htm
  • Stand-alone applications need to use casting (make sure to press F11 to make the browser fullscreen). Start casting on the Meta headset (in the Meta universal menu, go to Settings > Camera > Cast), then go to https://www.oculus.com/casting/. This method uses only head position for the gaze point and replay, and will also synchronize with the physiological data if using AcqKnowledge.
  • Choose "Desktop" as the hardware choice for the casting method.

  • Face tracking data can be found in the data folder and visualized using the facial_expressions_over_time.py script.

  • Gaze point position will not be accurate in the Z axis, as the gaze ray only hits a collide box, not the objects in the scene the user is actually looking at
  • Head orientation (Meta Quest Pro) would need to be manually aligned if not using the same origin starting point
  • Not able to tag regions or objects of interest (there is an experimental option to use AI to auto-recognize objects, but manual tagging is not supported)
  • For headsets that go through SteamVR (Omnicept, Vive Focus 3, Vive Pro Eye) you will not get head orientation (but will show in the replay with the virtual screen)
  • Scrubbing through the Session Replay uses B and N or C and V to skip; dragging the slider does not move the video
  • Note: For the Vive Focus Vision and Vive Focus 3, you need to use the SRAnipal driver instead of OpenXR. Download here. (You don't need to run the Vive Console, just Vive Streaming, but the Vive Console software must be installed to have access to the SRAnipal driver)
  • To verify that the eye tracker is working, it is recommended to run SightLab_VR.py first and press 'p' to see your gaze point moving.
  • If running with SteamVR, minimize the SteamVR window first so that doesn't show on top of the video
  • Can also leverage the Plotly visualization example to view a walk path and gaze path (requires the Plotly Python library)
  • The eye tracking may need to be calibrated on the device first. You can then also calibrate the replay screen by focusing on a point in the scene and adjusting REPLAY_SCREEN_CALIBRATION in the config file until the point matches up, if it is off.
  • You can uncomment the specific viewpoint position values in the Data_Recorder_Config.py file using Shift+Alt+3, and comment out the ones you're not using (Alt+3). Presets are included for the Meta Quest Pro, Vive Focus Vision, and Desktop.

