
SightLab Features Missing From (or Extending Beyond) OEP Profiles

A consolidated list with a short blurb for each feature, complete and ready for meeting use.


Combined SightLab vs OEP Feature Summary

| Category / Feature Area | Supported in SightLab? | In OEP Profiles? | Notes |
| --- | --- | --- | --- |
| Object interaction / grabbing | ✔ | ✖ | No OEP object-action or manipulation system |
| Hand tracking / gesture-based grabbing | ✔ | ✖ | Engine-level feature (OpenXR hands, gloves), not modeled in OEP |
| Physics & collisions / rigid bodies | ✔ | ✖ | OEP defines no physics model; teleport only appears in later profiles |
| Teleportation / guided locomotion | ✔ | ✖ (v2+) | SightLab supports arc/jump/transport; OEP v1 explicitly excludes it |
| Foot & full-body tracking / IK avatars | ✔ | ✖ | OEP only defines head pose; no body skeleton or IK |
| Avatar animation & NPC behaviors | ✔ | ✖ | No avatar or animation representation in OEP |
| AI agents & generative features (LLM, speech, avatar control) | ✔ | ✖ | Completely outside OEP’s experiment model |
| Per-object dwell/fixation/saccade analytics | ✔ | ✖ | OEP only supports raw gaze vectors; no AOI/object analytics |
| Advanced eye metrics (fixations, saccades, pupil, TTFF, etc.) | ✔ | ✖ | No analytical eye-tracking metrics in OEP |
| Screen-based eye tracking (Tobii, EyeLogic) | ✔ | ✖ | OEP does not support desktop / monitor-mounted trackers |
| Mixed Reality support (passthrough, real-object ROI) | ✔ | ✖ | OEP has no MR mode or passthrough concepts |
| Mixed Reality eye tracking (real-world gaze intersection) | ✔ | ✖ | Not included in OEP; SightLab supports MR eye-tracking templates |
| Dynamic / moving ROIs (360° AOIs) | ✔ | △ (future) | SightLab far exceeds current OEP profiling |
| 360° video, spatial video, 180 SBS media | ✔ | △ (basic assets only) | OEP doesn’t define video-specific ROI tools |
| Image viewer & media presentation tools | ✔ | ✖ | OEP defines assets, not full image/video viewer workflows |
| Virtual screens / screencasting | ✔ | ✖ | No OEP concept for external-app display or virtual monitors |
| Session Replay system (heatmaps, scanpaths, timelines) | ✔ | ✖ | OEP provides no replay/log playback pipeline |
| Replay-linked analytics & Biopac sync | ✔ | ✖ | Replay can drive external timelines (e.g., AcqKnowledge) |
| External Application Recorder | ✔ | ✖ | Unique SightLab feature; records gaze/video over external apps |
| Multi-user VR (avatars, gaze, sync) | ✔ | ✖ | OEP networking is stubbed/non-functional |
| Presenter Mode / instructor tools | ✔ | ✖ | OEP has no instructor or remote-control flow |
| VR menus & tablet UI systems | ✔ | ✖ | No UI/menu abstraction layer in OEP |
| Instruction system, rating scales & survey UI | ✔ | ✖ | OEP does not define instructional systems or survey UI widgets |
| LSL physiology (EEG/EDA/ECG sync) | ✔ | ✖ (v2+) | OEP physiology is v2+ only |
| Biopac / NDT integration | ✔ | ✖ | Not in OEP v1 |
| Driving simulator / vehicle physics | ✔ | ✖ | OEP does not support continuous control or vehicle physics tasks |
| 3D drawing tools / annotation | ✔ | ✖ | No drawing/annotation tools in OEP |
| Measuring tools (VR measuring tape) | ✔ | ✖ | Not represented in OEP feature profiles |
| Environment controls (lighting, environment switching) | ✔ | ✖ | OEP defines static assets only |
| Audio recording/transcripts | ✔ | ✖ | SightLab fully supports; OEP does not |
| Regenerate data with new algorithms | ✔ | ✖ | OEP has no concept of data regeneration |
| Scene authoring tools (Inspector, GUI, drag-and-drop) | ✔ | ✖ | OEP is not an authoring environment |
| Face Tracking (expressions, visemes, avatar blendshapes) | ✔ | ✖ | Full Meta & HTC facial tracking; not in OEP |
| Publishing as executable | ✔ | ✖ | Engine-level deployment, not part of OEP |

1. Advanced Interaction & Object Systems

1.1 Object Interaction & Physics-Based Grabbing

SightLab supports grabbing, dropping, physics interactions, distance grabbing, hand-tracking grabs, and logging of grab/release events.
OEP does not define any object-action or interaction model.
Includes physical objects, rigid-body collisions, and selector-tool interactions.
🗂️ Refs: hand tracking & grabbing, collisions & gravity, selector tool

1.2 Highlighter / Selector Tool

A VR/desktop object-targeting tool that casts a ray, highlights items, and triggers events. Supports multi-user sync.
OEP has no concept of selector/highlighter actions.
🗂️ Refs: selector tool

1.3 Measuring Tape Tool

Precise measurement of real or virtual distances using controller or mouse inputs—used in VR, AR, and Mixed Reality calibration.
No equivalent in OEP.
🗂️ Refs: measuring tape

1.4 Complex Scene & Environment Tools

Includes scene swapping, per-trial environment changes, object placement tools, object visibility control, environment lighting, walk path creation, ramps/terrain physics.
OEP does not define an environment API.


2. Eye Tracking, Fixations & Attention Analytics

2.1 Per-Object Gaze, Dwell, Fixation, Saccade Analytics

SightLab’s strongest feature set:

  • fixations (centroid-based)
  • saccades (velocity, amplitude, delta-gaze)
  • dwell time
  • view/dwell visits
  • time to first fixation
  • full fixation–saccade timeline export
OEP profiles define only raw gaze intersections (with gaze triggers possibly arriving in future versions), not analytics.
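As a rough illustration of the fixation metrics listed above, a centroid/dispersion-threshold (I-DT) detector can be sketched as follows. The sample layout, thresholds, and function name are assumptions for illustration, not SightLab’s actual implementation or API:

```python
def detect_fixations(samples, max_dispersion=1.0, min_duration=0.1):
    """Dispersion-threshold fixation detection sketch.

    samples: list of (timestamp_s, x_deg, y_deg) gaze points.
    Returns (start_s, end_s, centroid_x, centroid_y) per fixation.
    """
    fixations = []
    i = 0
    while i < len(samples):
        j = i
        window = [samples[i]]
        # Grow the window while gaze stays within the dispersion threshold.
        while j + 1 < len(samples):
            candidate = window + [samples[j + 1]]
            xs = [p[1] for p in candidate]
            ys = [p[2] for p in candidate]
            dispersion = (max(xs) - min(xs)) + (max(ys) - min(ys))
            if dispersion > max_dispersion:
                break
            window = candidate
            j += 1
        duration = window[-1][0] - window[0][0]
        if duration >= min_duration:
            # Stable-enough, long-enough window: record its centroid.
            cx = sum(p[1] for p in window) / len(window)
            cy = sum(p[2] for p in window) / len(window)
            fixations.append((window[0][0], window[-1][0], cx, cy))
            i = j + 1
        else:
            i += 1
    return fixations
```

Dwell time, visit counts, and time-to-first-fixation then fall out of intersecting these fixation intervals with per-object gaze hits.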

2.2 Individual-Eye Data & Gaze Rays

Left/right eye intersections, combined gaze, per-eye rotations, gaze rays rendered in replay.
OEP defines no such metrics.

2.3 Advanced Noise Reduction & Eye-Tracking Filters

Includes noise filtering, dispersion thresholds, smoothing, and regeneration with updated algorithms.
🗂️ Refs: regenerate data
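As a small illustration of the smoothing step, a centered moving-average filter over one gaze axis might look like the following; the window size and data layout are assumptions, not SightLab internals:

```python
def smooth_gaze(xs, window=5):
    """Centered moving average over a gaze coordinate series.

    Shrinks the window at the edges so every input sample gets an output.
    """
    half = window // 2
    out = []
    for i in range(len(xs)):
        lo = max(0, i - half)
        hi = min(len(xs), i + half + 1)
        out.append(sum(xs[lo:hi]) / (hi - lo))
    return out
```

Because filters like this run over logged samples rather than live data, the same pass can be re-applied later with different parameters, which is what makes regeneration possible.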

2.4 360° & Moving ROIs

Dynamic regions for spherical media, scaling per axis, moving ROIs, multiple ROIs.
OEP only mentions ROI in a “future” optional tag.
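A minimal sketch of the underlying geometry, assuming gaze direction and ROI center are expressed as yaw/pitch in degrees on the media sphere (the function and parameter names are invented for illustration):

```python
def in_spherical_roi(gaze_yaw, gaze_pitch, roi_yaw, roi_pitch, width, height):
    """True if gaze falls inside a rectangular AOI on a 360-degree sphere.

    The ROI is centered at (roi_yaw, roi_pitch) and spans width x height
    degrees. Yaw differences wrap across the 360-degree seam, which is the
    detail a naive 2D rectangle test gets wrong on spherical media.
    """
    dyaw = (gaze_yaw - roi_yaw + 180.0) % 360.0 - 180.0
    dpitch = gaze_pitch - roi_pitch
    return abs(dyaw) <= width / 2 and abs(dpitch) <= height / 2
```

A moving ROI is then just this test with `roi_yaw`/`roi_pitch` updated per frame from the tracked target.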

2.5 Screen-Based Eye Tracker Support (Desktop Eyetrackers)

Native templates for Tobii, EyeLogic, etc.
OEP does not cover non-HMD trackers.
🗂️ Refs: screen-based eye trackers


3. Mixed Reality Features (XR/AR)

3.1 MR Eye Tracking on Real-World Objects

SightLab allows identifying real-world surfaces/objects and computing gaze intersections on them in passthrough XR.
OEP has no MR specification.
🗂️ Refs: mixed-reality & MR eye tracking

3.2 MR Object Region Placement Tools

Drag-to-place calibration regions, automatic naming overlays, passthrough replay compatibility.

3.3 Real-World Object Overlays

Blend virtual overlays with real objects, assign ROI labels, tag objects for data logging.


4. Advanced Experiment Design (Beyond OEP node graph)

4.1 Instruction System (Images, Text, Trial-Level Timing)

Prompting participants with instructions, fade-quads, rating-scale integration, image instructions.
OEP defines no participant instruction presentation.

4.2 Rating Scale & Survey System

Built-in rating panels, GUI-driven survey questions, CSV-based survey generation.
Not represented in OEP.

4.3 External STIM File Engine

Automatic experiment condition control via spreadsheets or CSV-based “stim files.”
OEP provides a notion of conditions, but not a full STIM-file loader.
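In spirit, CSV-driven condition control reduces to mapping each spreadsheet row to one trial’s condition dictionary. The column names below are invented for illustration and are not SightLab’s actual stim-file schema:

```python
import csv

def load_trials(path):
    """Read a CSV stim file; each row becomes one trial's condition dict."""
    with open(path, newline="") as f:
        return list(csv.DictReader(f))

# Hypothetical usage with invented column names:
# for trial in load_trials("stim_conditions.csv"):
#     environment = trial["environment"]
#     duration = float(trial["duration_s"])
```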

4.4 Timer-Based & Event-Based Flow Control

Event callbacks, timers, keypress triggers, multi-user trial sync.
The OEP runtime offers only far simpler flow control.

4.5 Jump-To Locations / Navigation Menu

Teleport shortcuts, arc teleport, guided navigation.
OEP explicitly lists teleport as ✖ / v2+.


5. Data Logging & Replay (Extremely Expanded Compared to OEP)

5.1 Session Replay System

Full replay of VR sessions including:

  • gaze point trajectories
  • heatmaps
  • scan paths
  • avatars
  • ROIs
  • 360 media
OEP has no replay specification.
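For illustration, the heatmap aggregation in a replay layer reduces to binning logged gaze points into a grid; the normalized-coordinate layout and resolution below are assumptions, not SightLab’s actual data format:

```python
def gaze_heatmap(points, bins=32):
    """Bin 2D gaze points into a bins x bins count grid.

    points: (x, y) pairs normalized to [0, 1); out-of-range points are
    skipped. The grid can then be blurred/colorized for display.
    """
    grid = [[0] * bins for _ in range(bins)]
    for x, y in points:
        if 0.0 <= x < 1.0 and 0.0 <= y < 1.0:
            grid[int(y * bins)][int(x * bins)] += 1
    return grid
```

Scan paths are the complementary view: the same logged points kept in time order and drawn as a connected polyline instead of aggregated counts.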

5.2 Replay Synchronization with Biopac AcqKnowledge

Replay can drive the Biopac graph timeline.
OEP does not define physiological sync.

5.3 Regenerate Past Data Under New Algorithms

Recompute fixations, saccades, noise thresholds—unique to SightLab.
🗂️ Refs: regenerate data

5.4 External Application Data Recorder

Record gaze + video + triggers from non-VR apps, with:

  • video capture
  • region tagging
  • AI analysis of sessions
  • external app synchronization
OEP defines no external recorder.

6. Multi-User / Networking System (Far beyond OEP)

6.1 Full Multi-User Avatar Synchronization

Supports:

  • full-body avatars
  • face tracking
  • hand tracking
  • voice
  • audio transcripts
  • synchronized gaze rays
OEP networking is marked ✖ / stubbed.

6.2 Multi-User Presenter Mode

Presenter can control other clients:

  • trial navigation
  • teleporting users
  • highlighting
  • resetting users
  • 3D menu controller
🗂️ Refs: presentation mode
OEP provides no comparable runtime.

6.3 Synced State Sharing

Shared variables, synced flags, synchronized teaching tools.
Absent in OEP.


7. Avatar, Body, Gesture & Animation Systems

7.1 Full Body Tracking & Avatars

Trackers, Mocopi, Vive trackers, procedural animation, mapped facial expressions.
OEP has no avatar model.

7.2 Face Tracking

Quest Pro, Vive Focus Vision face trackers, mapped to avatars with replay support.

7.3 Hand Tracking for Grabbing & Gesture Control

Gesture recognition, grabbing with physics, interaction with scene objects.
Much richer than OEP’s simple “controller pose”.

7.4 Scripted Avatar Agents

Avatars with speech, animations, gestures triggered by STIM or event systems.


8. AI & Intelligent Agents

8.1 AI Agent Framework (ChatGPT, Claude, Gemini, Ollama offline models)

Supports:

  • speech recognition
  • TTS (ElevenLabs, Piper)
  • avatar-driven conversational AI
  • multi-agent interaction
No OEP support for AI.

8.2 AI Vision for Scene Understanding

Agents can “see” the 3D scene or screenshots and respond.
Unique to SightLab.

8.3 AI 3D Model, Image Generation & Spawning

Generate new stimuli dynamically.


9. Audio, Video & Media Systems

9.1 Audio Recording & Transcription

SightLab already captures audio + transcript.
OEP lists audio recording as ✖ (unsupported).
➡️ Important: This is NOT removed from SightLab; OEP simply doesn’t support it.

9.2 360 Video, 180 SBS, Spatial 3D Video

Full playback + ROIs + analytics.
OEP mentions only 2D assets.

9.3 Virtual 2D Screens & Screencasting

Display desktop/mobile content in VR and record gaze over screens.
OEP has no screen model.


10. Publishing, Tools & Utility Systems

10.1 Publish as Executable

Tools for bundling VR experiments as standalone EXEs, including Replay EXEs.
🗂️ Refs: publishing executables

10.2 Asset Browser & Scene Inspector Tools

Import assets, swap models, drag-and-drop object inspector.

10.3 Digital Twin / Real-Time Data Visualization

Map real-time data streams into 3D scenes.

10.4 Multi-Device Support (XR, desktop, eyetrackers, trackers)

Far beyond the “single profile” requirements in OEP.

High-Level Interpretation: What This Means For SightLab 3.0

NO features need to be removed.

OEP is not a limit—it is a baseline format.
SightLab implements a functional superset of OEP.

SightLab 3.0 will simply *export* a minimal compatible OEP layer.

Your advanced features remain SightLab-native.

Your job is to identify which SightLab features are NOT represented in OEP so they can be added as:

  1. Extensions
  2. Metadata blocks
  3. Runtime capability declarations

The list above is the complete set for that.
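As a purely hypothetical sketch of points 1–3, an export layer could emit the OEP-modeled subset plus a vendor extension block declaring SightLab-only capabilities. All key names below are invented placeholders, since no published OEP schema is quoted here:

```python
import json

def export_oep_layer(profile_features, sightlab_only):
    """Emit a minimal baseline record plus a vendor extension block.

    profile_features: the subset of features OEP actually models.
    sightlab_only: SightLab-native capabilities declared as extensions.
    """
    record = {
        "oep_version": "1.0",                       # baseline layer OEP tools read
        "features": profile_features,               # the OEP-modeled subset
        "extensions": {                             # metadata block, ignorable by
            "vendor": "sightlab",                   # strict OEP consumers
            "capabilities": sorted(sightlab_only),  # runtime capability declarations
        },
    }
    return json.dumps(record, indent=2)
```

The point of the shape: a strict OEP consumer reads only the baseline keys, while SightLab-aware tooling can opt into everything declared under the extension block.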