- The function retrieves the eye tracker's matrix, which represents its position and orientation.
- It combines this with the main view matrix to get the gaze direction in the virtual environment.
- A line is created to represent the gaze direction.
- The viz.intersect() function checks if this line intersects with any objects in the scene.
- If there's a valid intersection, the gaze point object's position is updated to the intersection point.
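Inside Vizard, `viz.intersect()` performs this test against the whole scene. As a language-agnostic sketch of the underlying math, the following plain-Python function intersects a gaze ray with a single sphere (the function name, arguments, and sphere-only scene are illustrative assumptions, not Vizard API):

```python
import math

def gaze_ray_sphere_intersect(origin, direction, center, radius):
    """Return the nearest point where a gaze ray hits a sphere,
    or None if the ray misses. All vectors are (x, y, z) tuples."""
    # Vector from the ray origin (the eye) to the sphere center
    oc = tuple(o - c for o, c in zip(origin, center))
    # Quadratic coefficients for |origin + t*direction - center|^2 = radius^2
    a = sum(d * d for d in direction)
    b = 2.0 * sum(o * d for o, d in zip(oc, direction))
    c = sum(o * o for o in oc) - radius * radius
    disc = b * b - 4.0 * a * c
    if disc < 0:
        return None  # gaze ray misses the object entirely
    t = (-b - math.sqrt(disc)) / (2.0 * a)
    if t < 0:
        t = (-b + math.sqrt(disc)) / (2.0 * a)
        if t < 0:
            return None  # object lies behind the viewer
    # The intersection point: where the gaze meets the object
    return tuple(o + t * d for o, d in zip(origin, direction))
```

For example, a viewer at the origin looking straight down +Z at a unit sphere centered 5 m away hits it at (0, 0, 4); a scene engine like Vizard repeats this kind of test against every intersectable object and returns the nearest hit.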
Data Usage and Interpretation
The x, y, z coordinates of the intersection point represent where the user's gaze meets objects in the virtual environment.
Combined (binocular) gaze data is sufficient for general gaze tracking, while per-eye data enables finer-grained analysis such as vergence and binocular coordination.
Eye rotation data can be used to analyze eye movements and potentially detect specific eye behaviors.
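A combined gaze vector is commonly derived by averaging the two per-eye gaze directions and renormalizing; a minimal sketch (the function name is a hypothetical, not a specific tracker's API):

```python
import math

def combined_gaze(left_dir, right_dir):
    """Average the left- and right-eye gaze direction vectors into a
    single combined (cyclopean) gaze vector, normalized to unit length."""
    avg = tuple((l + r) / 2.0 for l, r in zip(left_dir, right_dir))
    norm = math.sqrt(sum(v * v for v in avg))
    if norm == 0:
        raise ValueError("eye directions cancel out; cannot combine")
    return tuple(v / norm for v in avg)
```

With slightly converged eyes, e.g. left (0.1, 0, 1) and right (-0.1, 0, 1), the combined gaze points straight ahead at (0, 0, 1).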
Additional Data
Depending on the VR headset and eye tracker in use, further measures may be available (e.g., pupil diameter, eye openness).
Fixation State: Indicates whether the gaze is in a fixation or saccade state.
Saccade Angle: The angle of eye movement during a saccade.
Saccade Velocity: The average and peak angular velocity during a saccade, typically reported in degrees per second.
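Saccade velocity can be estimated from successive gaze-direction samples: the angle between consecutive unit vectors divided by the sampling interval gives angular velocity. A sketch assuming evenly spaced samples (function names are illustrative):

```python
import math

def angle_deg(v1, v2):
    """Angle in degrees between two unit gaze-direction vectors."""
    dot = max(-1.0, min(1.0, sum(a * b for a, b in zip(v1, v2))))
    return math.degrees(math.acos(dot))

def saccade_velocity(samples, dt):
    """Given unit gaze vectors sampled every dt seconds, return the
    (average, peak) angular velocity in degrees per second."""
    vels = [angle_deg(a, b) / dt for a, b in zip(samples, samples[1:])]
    return sum(vels) / len(vels), max(vels)
```

For instance, gaze rotating 1 degree per 10 ms sample yields an angular velocity of about 100 deg/s; in practice a velocity threshold on this quantity is also a common way to separate saccades from fixations.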
Gaze Count: The number of views or gaze events recorded for each object in the scene.
Gaze Duration: The total gaze duration per object, and the average gaze duration per object (total gaze time divided by the number of gaze events).
Time to First Fixation: Measuring the time it takes for a participant to first fixate on a specific area of interest after a stimulus is presented.
Fixation Sequence Analysis: The order in which different areas of interest are fixated upon, which can indicate the cognitive process or strategy employed by the viewer.
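Given a log of gaze events as (object, start time, end time) records, all of the metrics above fall out of a single pass over the data. A sketch under that assumed log format (function and field names are hypothetical):

```python
def gaze_metrics(events, stimulus_onset=0.0):
    """Summarize gaze events per object.
    events: list of (object_name, start_time, end_time) tuples, seconds.
    Returns {object: {'count', 'total', 'average', 'time_to_first_fixation'}}."""
    metrics = {}
    for name, start, end in sorted(events, key=lambda e: e[1]):
        m = metrics.setdefault(name, {'count': 0, 'total': 0.0,
                                      'time_to_first_fixation': None})
        m['count'] += 1                      # gaze/view count per object
        m['total'] += end - start            # total gaze duration
        if m['time_to_first_fixation'] is None:
            # latency from stimulus onset to the first fixation on this object
            m['time_to_first_fixation'] = start - stimulus_onset
    for m in metrics.values():
        m['average'] = m['total'] / m['count']  # average duration per event
    return metrics

def fixation_sequence(events):
    """Order in which objects were fixated, by event start time."""
    return [name for name, start, _ in sorted(events, key=lambda e: e[1])]
```

For example, the events [('logo', 0.5, 1.0), ('text', 1.2, 2.0), ('logo', 2.5, 3.0)] give 'logo' a count of 2, a total of 1.0 s, an average of 0.5 s, a time to first fixation of 0.5 s, and the fixation sequence logo, text, logo.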
Heatmaps: Aggregate overlays showing where gaze was concentrated across the scene.
Scan Paths: The trajectory of gaze over time, typically drawn as fixations connected by saccades.
Walk Paths: The participant's movement through the virtual environment, often shown alongside gaze data.
Interactive Playback: Replaying a recorded session with gaze and position data for review.
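At their core, heatmaps are built by binning gaze points into a grid and counting hits per cell (the counts are then smoothed and color-mapped for display). A minimal binning sketch, assuming 2D gaze coordinates in screen or surface space (names are illustrative):

```python
def gaze_heatmap(points, width, height, bins_x, bins_y):
    """Accumulate 2D gaze points into a bins_y x bins_x grid of counts.
    points: (x, y) pairs in [0, width) x [0, height)."""
    grid = [[0] * bins_x for _ in range(bins_y)]
    for x, y in points:
        if 0 <= x < width and 0 <= y < height:  # ignore off-surface samples
            gx = int(x / width * bins_x)
            gy = int(y / height * bins_y)
            grid[gy][gx] += 1
    return grid
```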
Area of Interest (AOI) Analysis: Defining specific regions within the visual scene to examine how much and how long subjects look at these areas.
Gaze Contingent Display: Changing what is shown on the screen based on where the user is looking, often used in dynamic experiments.