Eye Tracking Data

Overview

SightLab exports eye tracking data including:

  • The x, y, z coordinates of the gaze intersection point (combined and per eye)
  • Rotational values for each eye (left and right)

Coordinate System

Intersect Point Coordinates

  • System: 3D Cartesian coordinate system (X, Y, Z)
  • Origin: Center of the virtual environment
  • Axes:
    • Positive X: Extends to the right
    • Positive Y: Extends upward
    • Positive Z: Extends outward from the screen
  • Units: Meters
  • Reference: Coordinates are relative to the parent object's coordinate system (see the conversion sketch below)
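
Because the intersect coordinates are relative to the parent object's coordinate system, converting them to world coordinates requires the parent's world transform. A minimal sketch in plain Python, assuming a row-major 4x4 matrix with the translation in the last column (the matrix values here are hypothetical):

def local_to_world(point, parent_matrix):
    """Apply a row-major 4x4 transform (rotation plus translation) to a 3D point."""
    x, y, z = point
    return [parent_matrix[row][0] * x +
            parent_matrix[row][1] * y +
            parent_matrix[row][2] * z +
            parent_matrix[row][3] for row in range(3)]

# Example: a parent object translated 2 meters along +X, with no rotation.
parent_matrix = [
    [1, 0, 0, 2],
    [0, 1, 0, 0],
    [0, 0, 1, 0],
    [0, 0, 0, 1],
]
print(local_to_world([0.5, 1.6, 3.0], parent_matrix))  # [2.5, 1.6, 3.0]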

Eye Rotation

  • Represents the rotational orientation of each eye
  • Measured in degrees
  • Represented as Euler angles (see the direction-vector sketch below)
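
When a gaze direction vector is more convenient than angles, the Euler values can be converted. A minimal sketch, assuming yaw is rotation about the Y axis and pitch about the X axis with +Z forward; the sign convention for pitch varies between systems and should be verified against your own data:

import math

def euler_to_direction(yaw_deg, pitch_deg):
    """Convert yaw/pitch in degrees to a unit gaze direction vector."""
    yaw = math.radians(yaw_deg)
    pitch = math.radians(pitch_deg)
    return (math.cos(pitch) * math.sin(yaw),   # x: right
            -math.sin(pitch),                  # y: up (assumes positive pitch looks down)
            math.cos(pitch) * math.cos(yaw))   # z: forward

print(euler_to_direction(0, 0))    # straight ahead: (0.0, -0.0, 1.0)
print(euler_to_direction(90, 0))   # looking right: (1.0, -0.0, ~0.0)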

Data Types

Gaze Intersection Data

  • Combined gaze: x, y, z coordinates of where the combined gaze intersects with objects in the scene
  • Individual eye gaze: x, y, z coordinates for each eye's gaze intersection

Eye Rotation Data

  • Left eye rotation: Rotational values for the left eye in Euler angles
  • Right eye rotation: Rotational values for the right eye in Euler angles
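
For downstream analysis, it can help to load each exported sample into a structured record. The layout below is hypothetical; the actual SightLab column names and ordering may differ:

from dataclasses import dataclass
from typing import Optional, Tuple

Vec3 = Tuple[float, float, float]

@dataclass
class GazeSample:
    timestamp: float                  # seconds since trial start
    combined_intersect: Vec3          # combined gaze intersection (meters)
    left_intersect: Optional[Vec3]    # per-eye intersections, if available
    right_intersect: Optional[Vec3]
    left_euler: Vec3                  # left eye rotation (yaw, pitch, roll, degrees)
    right_euler: Vec3                 # right eye rotation (degrees)

sample = GazeSample(timestamp=12.05,
                    combined_intersect=(0.42, 1.61, 2.97),
                    left_intersect=None, right_intersect=None,
                    left_euler=(3.1, -1.2, 0.0), right_euler=(2.8, -1.3, 0.0))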

Data Collection Process

The eye tracking data is collected and processed in the updateGazeObject function within the "sightlab" module. Here's a breakdown of the process:

  1. Retrieve the eye tracker's transformation matrix
  2. Combine it with the main view matrix to get the gaze direction in the virtual environment
  3. Create a line representing the gaze direction
  4. Check for intersection with objects in the scene
  5. Update the gaze point object's position if there's a valid intersection
  6. Record the time, intersection point, main view position, and other relevant flags

Code Explanation

import viz
import vizconnect

def updateGazeObject(self):
    # Get the raw eye tracker from the vizconnect configuration
    self.eyeTracker = vizconnect.getTracker('eye_tracker').getRaw()
    # The eye tracker matrix holds the gaze position and orientation
    gazeMat = self.eyeTracker.getMatrix()
    # Combine with the main view matrix to express the gaze in world space
    gazeMat.postMult(viz.MainView.getMatrix())
    # Build a long line along the gaze direction
    line = gazeMat.getLineForward(1000)
    # Test the line against objects in the scene
    info = viz.intersect(line.begin, line.end)

    # Move the gaze point marker to the intersection, if any
    if info.valid and self.gazePointObject is not None:
        self.gazePointObject.setPosition(info.point)

  • The function retrieves the eye tracker's matrix, which represents its position and orientation.
  • It combines this with the main view matrix to get the gaze direction in the virtual environment.
  • A line is created to represent the gaze direction.
  • The viz.intersect() function checks whether this line intersects any objects in the scene.
  • If there is a valid intersection, the gaze point object's position is updated to the intersection point.
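
The recording in step 6 is omitted from the excerpt above. A minimal sketch of what that step could look like; the CSV layout and field names here are illustrative, not SightLab's actual export format:

import csv
import time

class GazeRecorder:
    """Hypothetical recorder for per-frame gaze samples."""
    def __init__(self, path):
        self._file = open(path, 'w', newline='')
        self._writer = csv.writer(self._file)
        self._writer.writerow(['time', 'gaze_x', 'gaze_y', 'gaze_z',
                               'view_x', 'view_y', 'view_z', 'valid'])
        self._start = time.time()

    def record(self, intersect_point, view_position, valid):
        # One row per frame: elapsed time, intersection, view position, validity flag
        self._writer.writerow([time.time() - self._start,
                               *intersect_point, *view_position, int(valid)])

    def close(self):
        self._file.close()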

Data Usage and Interpretation

  • The x, y, z coordinates of the intersection point represent where the user's gaze meets objects in the virtual environment.
  • Combined gaze data can be used for general gaze tracking, while individual eye data allows for more detailed analysis.
  • Eye rotation data can be used to analyze eye movements and potentially detect specific eye behaviors (see the saccade sketch below).
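
As an example of working with the rotation data, likely saccades can be flagged by thresholding the angular velocity between successive samples. The 100 deg/s threshold and the samples below are illustrative only:

import math

def flag_saccades(samples, threshold_deg_per_s=100.0):
    """samples: list of (timestamp_s, yaw_deg, pitch_deg). Returns one flag per interval."""
    flags = []
    for (t1, y1, p1), (t2, y2, p2) in zip(samples, samples[1:]):
        # Approximate angular change between samples, in degrees
        velocity = math.hypot(y2 - y1, p2 - p1) / (t2 - t1)
        flags.append(velocity > threshold_deg_per_s)
    return flags

samples = [(0.000, 0.0, 0.0), (0.011, 0.2, 0.1), (0.022, 3.5, 1.0)]
print(flag_saccades(samples))  # [False, True]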

Additional Data

  • Measures that depend on the VR system in use (e.g., pupil diameter, eye openness).
  • Fixation State: Indicates whether the gaze is in a fixation or saccade state.
  • Saccade Angle: The angle of eye movement during a saccade.
  • Saccade Velocity: Average and peak velocity during a saccade.
  • Object View Counts: The number of views or gaze events recorded for each object in a scene.
  • Gaze Duration: The total gaze duration and the average gaze duration per object (total gaze time divided by the number of gaze events; see the sketch after this list).
  • Time to First Fixation: Measuring the time it takes for a participant to first fixate on a specific area of interest after a stimulus is presented.
  • Fixation Sequence Analysis: The order in which different areas of interest are fixated upon, which can indicate the cognitive process or strategy employed by the viewer.
  • Heatmaps
  • Scan Paths
  • Walk Paths
  • Interactive playback
  • Area of Interest (AOI) Analysis: Defining specific regions within the visual scene to examine how much and how long subjects look at these areas.
  • Gaze Contingent Display: Changing what is shown on the screen based on where the user is looking, often used in dynamic experiments.
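
The per-object statistics above (view counts, total and average gaze duration, time to first fixation) can be computed from a list of gaze events. The event format here is hypothetical, not SightLab's export schema:

def summarize_gaze(events):
    """events: list of (object_name, start_s, end_s), sorted by start time."""
    stats = {}
    for name, start, end in events:
        s = stats.setdefault(name, {'views': 0, 'total': 0.0, 'first_fixation': start})
        s['views'] += 1
        s['total'] += end - start
    for s in stats.values():
        # Average gaze duration: total gaze time divided by number of gaze events
        s['average'] = s['total'] / s['views']
    return stats

events = [('painting', 0.5, 1.2), ('door', 1.4, 1.9), ('painting', 2.3, 3.0)]
for name, s in summarize_gaze(events).items():
    print(name, s)
# painting: 2 views, ~1.4 s total, ~0.7 s average, first fixation at 0.5 s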

For more information, see the eye tracking metrics page.