External Application Data Recorder
Download the latest version from here
Overview
The External Application Data Recorder enables you to record, save, and synchronize sensor data while running external VR applications, including SteamVR games, Oculus apps, web-based VR experiences, and standalone/Android-based applications (note: standalone mode does not include eye tracking). After a session you can run the replay and view the gaze point synchronized with the data. There is also an experimental mode that uses AI to detect object views and view counts.
Supports the following headsets/hardware modes:
- Vive Focus Vision, Meta Quest Pro, Vive Focus 3, Varjo XR-3/XR-4, Omnicept, Vive Pro Eye, Meta Quest 3/3S and 2 (no eye tracking; head position only). Support is also included for generic SteamVR and OpenXR headsets, but results may vary.
Data
List of some of the data that can be recorded (will be saved to the data folder, unless a different directory is chosen):
- Eye Gaze Position
- Eye gaze rotation (combined and per eye)
- Gaze Point Overlay
- Fixations
- Number of Fixations
- Saccades (saccade state, amplitude, velocity, peak velocity, average saccadic amplitude, average saccadic velocity, peak saccadic velocity)
- View Counts and Dwell Time (experimental using AI analysis)
- Head orientation (needs to be manually aligned if not at same origin point)
- Pupil Diameter (Varjo, Omnicept, Vive Pro Eye, Pupil Labs)
- Eye Openness Value (Varjo, Omnicept, Vive Pro Eye, Vive Focus 3, Vive Focus Vision)
- Heart Rate (Omnicept)
- Cognitive Load (Omnicept)
- Interactions using the chosen event trigger
- Time Stamp (from trial start and UNIX timestamp for synchronizing with other data streams)
- Gaze matrix of eye rotation
- Video Recording
- All Biopac Acqknowledge physiological data if connecting to Biopac
- Facial Expression values (Meta Quest Pro)
- Custom Flags
- Video recording can be imported to Acqknowledge to see how it matches up with physiological data
Setup
To record video, ensure the following Python libraries are installed:
- pyautogui
- pywin32
- pygetwindow
- opencv-python
- mss
- numpy
(comes with the SightLab installation)
You can install these via the Package Manager. You will also need to install Tkinter from the Vizard installer.
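If any of these are missing and you prefer a command prompt over the Package Manager, a pip install along these lines should also work (a sketch, assuming pip is available for the Python interpreter that Vizard/SightLab uses; adjust the interpreter if needed):
# Install the screen-capture dependencies with pip.
# Assumes this is run with the same Python interpreter that Vizard/SightLab uses.
import subprocess, sys

packages = ["pyautogui", "pywin32", "pygetwindow", "opencv-python", "mss"]
subprocess.check_call([sys.executable, "-m", "pip", "install", *packages])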
How to Run
- Open the desired VR application on your PC.
- Edit the Data_Recorder_Config.py file if using Biopac, saving face tracking data, or adjusting other options.
- Start the Data_Recorder_External_Application script.
- Choose the title of your application from the dropdown (this will show all open windows).
- Choose how long you want to record (if USE_TIMER_DROPDOWN is set to True).
- Choose your hardware.
- Click on the Data Recorder window to make it active and press Spacebar to begin recording. The recording will stop automatically after the timer that you set. To quit early at any time, Alt+Tab back to the Data Recorder window and press Spacebar.
- To view a replay, run DataRecorderReplay.py, which will display gaze point data on the virtual screen.
- Data files are stored in the data folder, and the original video is saved in the recordings folder.
- Press the "1" key in the replay to record a video with the gaze point (this needs to play through in real time). Press "2" to stop. The video is saved in the "replay_recordings" folder. Open the recorded video in AcqKnowledge to see the synchronization with physiological data. See here for how to do that.
- Use "Auto AI View Detection" to get object views (experimental)
Configuration Settings
# ===== Biopac & Network Sync Settings =====
BIOPAC_ON = False # Communicate with Biopac Acqknowledge
LOCK_TRANSPORT = True # Lock the transport
NETWORK_SYNC_KEY = 't' # Key to send event marker to Acqknowledge
NETWORK_SYNC_EVENT = 'triggerPress' # Event trigger for marker
USE_NETWORK_EVENT = False # Send network event to external app
# ===== Recording & Data Logging =====
RECORD_VIDEO_OF_PLAYBACK = True # Record playback session
RECORD_GAZE_MAT = True # Save gaze matrix data
RECORD_FACE_TRACKER_DATA = False # Save facial expression data
RECORD_VIDEO = True # Start/Stop video recording
# ===== Replay Calibration & View Adjustment =====
REPLAY_SCREEN_CALIBRATION = [-0.4, 0.2, 0.0] # Offset [x, y, z] to align the gaze point with the replay screen
VIEWPOINT_ADJUSTMENT_EULER = [0, 4, 0] # Euler rotation adjustment for the replay viewpoint
VIEWPOINT_ADJUSTMENT_POSITION = [-0.3, 0.1, 0] # Position adjustment for the replay viewpoint
STRETCH_FACTOR = 1.25 # Stretch factor applied to the virtual screen in the replay
FOLLOW_ON = True # Turn on first-person view in Replay by default
# ===== Timer & Session Control =====
USE_TIMER = True # Use timer instead of keypress
USE_TIMER_DROPDOWN = True # Enable dropdown for timer length
DEFAULT_TIMER_LENGTH = 10 # Default timer length (sec)
START_END_SESSION_KEY = ' ' # Key to stop/start trial
PLAY_END_SOUND = True # Play sound at end of trial/session
Auto AI View Detection (Experimental)
Required Python libraries:
- openai (also requires an OpenAI API key, placed in a text file named key.txt in the root directory)
- opencv-python
Note: The individual image analysis can use up a lot of tokens. See OpenAI's API pricing: https://openai.com/api/pricing/
Step 1: Run External Application and the External Application Data Recorder
Run the External Data Recorder and save the data.
Step 2: Run Replay and save video with gaze point overlay
- Start the replay script.
- Press ‘1’ to start recording and ‘2’ to stop.
This recorded video will be saved in a folder called “replay_recordings” and will include the gaze point. This will be used for the following steps.
Step 3: Extract Frames from the Video
Run convert video to images.py
This script will create a series of images, named sequentially (e.g., frame_0000.png, frame_0001.png, etc.) from the given video file.
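If you need to adapt or replicate the frame extraction, a minimal sketch with OpenCV looks like this (the video path and output folder below are placeholders; the shipped script may differ in its details):
# Minimal frame-extraction sketch using OpenCV.
# "replay_recordings/session.avi" and "frames" are placeholder paths.
import os
import cv2

video_path = "replay_recordings/session.avi"
output_dir = "frames"
os.makedirs(output_dir, exist_ok=True)

cap = cv2.VideoCapture(video_path)
index = 0
while True:
    ret, frame = cap.read()
    if not ret:
        break
    cv2.imwrite(os.path.join(output_dir, f"frame_{index:04d}.png"), frame)
    index += 1
cap.release()
print(f"Wrote {index} frames to {output_dir}")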
Step 4: Automate Image Upload and Object Views
- Run Auto_AI_View_Detection.py (a rough sketch of the kind of API call involved is shown after this list)
- Output is saved as a text file in the root directory as openai_response_date_time.txt
- You can ask follow-up questions in the Follow_Up_Questions.py script (on line 33). The AI responses will be saved to the "Analysis_Answers" folder, and if SPEAK_RESPONSE = True the answers can also be spoken back in real time.
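The exact prompts and model used by Auto_AI_View_Detection.py are not documented here, but sending a single extracted frame to the OpenAI vision API looks roughly like this (the model name, prompt wording, and frame path are assumptions; key.txt is read from the root directory as described above):
# Rough sketch: ask the OpenAI vision API what is under the gaze point in one frame.
# Model name, prompt, and frame path are assumptions for illustration only.
import base64
from openai import OpenAI

with open("key.txt") as f:
    client = OpenAI(api_key=f.read().strip())

with open("frames/frame_0000.png", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode("utf-8")

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "What object is under the gaze point (the overlaid dot) in this image?"},
            {"type": "image_url", "image_url": {"url": f"data:image/png;base64,{image_b64}"}},
        ],
    }],
)
print(response.choices[0].message.content)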
Considerations for Dwell Time Detection
Dwell time detection involves checking if the same object is identified in consecutive frames for a minimum duration. In the above script:
- dwell_time_threshold determines how many frames constitute a "dwell time".
- When the object identified by GPT remains the same across consecutive frames and exceeds the threshold, it is counted as a successful dwell event. A possible value is 15 frames, so at 30 fps that would be half a second for counting a "view" or "dwell time" (see the sketch below).
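As an illustration, counting dwell events from a sequence of per-frame labels can be done along these lines (a sketch, not the shipped script; the label list is a placeholder for the AI output per analyzed frame):
# Sketch of dwell-time counting over per-frame object labels.
dwell_time_threshold = 15  # frames; roughly half a second at 30 fps
labels_per_frame = ["lamp"] * 20 + ["chair"] * 10  # placeholder for the AI label of each frame

dwell_counts = {}
current_label = None
run_length = 0
for label in labels_per_frame:
    if label == current_label:
        run_length += 1
    else:
        current_label, run_length = label, 1
    if run_length == dwell_time_threshold:  # count each run once, when it reaches the threshold
        dwell_counts[label] = dwell_counts.get(label, 0) + 1

print(dwell_counts)  # e.g. {'lamp': 1} -- the chair run was too short to count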
Optimizations and Challenges
- Reducing Frame Rate: Depending on the frame rate of the video, processing every single frame might be overkill. Consider sampling every n frames (e.g., every 5th frame).
- Efficient API Usage: API calls can be costly and time-consuming. Instead of uploading all frames, use conditions to narrow down which frames need detailed analysis (see the sketch below).
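For example, processing every 5th frame and skipping frames that barely differ from the last analyzed one could look like this (a sketch; the step size and difference threshold are arbitrary choices):
# Sketch: sample every Nth extracted frame and skip frames nearly identical to the last one analyzed.
import glob
import cv2
import numpy as np

frame_step = 5         # analyze every 5th frame
diff_threshold = 8.0   # mean absolute pixel difference below which a frame is skipped (arbitrary)

previous = None
for path in sorted(glob.glob("frames/frame_*.png"))[::frame_step]:
    frame = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    if previous is not None and np.mean(cv2.absdiff(frame, previous)) < diff_threshold:
        continue  # scene barely changed; skip to save tokens
    previous = frame
    # ...send this frame to the vision API here...
    print("Would analyze", path)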
Limitations and Tips
- Standalone applications need to use casting (make sure to press F11 to fullscreen the browser). This will use only head position for the gaze point and replay, and will still synchronize with the physiological data if using AcqKnowledge.
- Face tracking data can be found in the data folder and visualized using the facial_expressions_over_time.py script.
- Gaze point position will not be accurate on the Z axis, as it only hits a collide box rather than the objects in the scene the user is actually looking at.
- Head orientation (Meta Quest Pro) would need to be manually aligned if not using the same origin starting point
- Not able to tag regions or objects of interest (experimental option to use AI to auto-recognize objects but can't manually tag)
- For headsets that go through SteamVR (Omnicept, Vive Focus 3, Vive Pro Eye) you will not get head orientation (though the virtual screen will still show in the replay)
- Scrubbing through the Session Replay uses B and N or C and V to skip. Dragging the slider doesn't move the video
- Note: For Vive Focus Vision and Vive Focus 3, you need to use the SRAnipal driver instead of OpenXR. Download here
- To verify that the eye tracker is working, it is recommended to run SightLab_VR.py first and press 'p' to see your gaze point moving.
- If running with SteamVR, minimize the SteamVR window first so that it doesn't show on top of the video
- Can also leverage the Plotly visualization example to view a walk path and gaze path (requires the Plotly Python library)
- The eye tracking may need to be calibrated on the device first. You can also calibrate the replay screen by focusing on a point in the scene and adjusting REPLAY_SCREEN_CALIBRATION in the config file until the point matches up, if it is off.
- You can uncomment the specific viewpoint position values in the Data_Recorder_Config.py file using Shift+Alt+3 and comment out the ones you're not using (using Alt+3). These are preset for the Meta Quest Pro, Vive Focus Vision, and Desktop.