

1. SightLab VR Pro — Core Features and Capabilities

SightLab VR Pro is a turnkey virtual reality experiment generator built on top of the Vizard engine, providing both GUI-based and code-based workflows.

⚙️ Experiment Design and Setup

  • Dual workflow: build experiments through the graphical interface or with Python scripting.
  • Add and manage 3D environments, objects, regions of interest (ROIs), and 360° media.
  • Configure experiment conditions, trial count, timing, randomization, and independent variables.
  • Supports manual or automatic trial triggers (keypress, timer, event).
  • Create experiments from templates (100+ prebuilt examples).
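The condition, trial-count, and randomization setup described above can be sketched in plain Python. This is a minimal illustration of the underlying idea, not SightLab's actual API; the function and variable names are invented for the example.

```python
import itertools
import random

def build_trials(variables, repetitions=2, seed=42):
    """Cross all levels of the independent variables, repeat, and shuffle.

    `variables` maps a variable name to its list of levels. A fixed seed
    makes the randomized trial order reproducible.
    """
    levels = [[(name, v) for v in vals] for name, vals in variables.items()]
    conditions = [dict(combo) for combo in itertools.product(*levels)]
    trials = conditions * repetitions
    random.Random(seed).shuffle(trials)
    return trials

# Hypothetical experiment: 2 environments x 2 target counts, 2 repetitions.
trials = build_trials({"environment": ["office", "gallery"],
                       "target_count": [3, 6]})
print(len(trials))  # 2 x 2 conditions x 2 repetitions = 8 trials
```

In SightLab itself this configuration is done through the GUI or its scripting layer; the sketch only shows the crossing-and-shuffling logic that such a trial system performs.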

🧩 Data Collection

SightLab integrates with a broad range of sensors and writes synchronized, timestamped data to CSV or JSON.

Eye & Gaze Tracking:

  • Fixations, saccades, pupil diameter, dwell time, and gaze intersection data.
  • Real-time combined eye vector and per-eye coordinates.
  • Fixation and saccade metrics: amplitude, velocity, peak velocity.
  • Dynamic ROIs and gaze-based interaction triggers.
  • Full integration with Tobii, Varjo, Vive Pro Eye, Omnicept, and more.
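As an illustration of one metric listed above, dwell time per object can be derived from timestamped gaze-intersection samples like those SightLab logs to CSV. The sample schema below is an assumption for the example, not SightLab's actual output format.

```python
# Each sample: (timestamp in seconds, name of object hit by gaze, or None).
samples = [
    (0.00, "painting"), (0.02, "painting"), (0.04, None),
    (0.06, "vase"), (0.08, "vase"), (0.10, "painting"),
]

def dwell_times(samples):
    """Sum the inter-sample intervals attributed to each gazed object."""
    totals = {}
    for (t0, obj), (t1, _next) in zip(samples, samples[1:]):
        if obj is not None:
            totals[obj] = totals.get(obj, 0.0) + (t1 - t0)
    return totals

print(dwell_times(samples))
```

The same pass over the data extends naturally to fixation counts or first-view timestamps per region of interest.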

Physiological & Biometric Data:

  • Real-time EEG, ECG, EDA, fNIRS synchronization.
  • BIOPAC AcqKnowledge event markers.
  • Lab Streaming Layer (LSL) support for other devices.
  • Biofeedback mode for live physiological influence.
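Synchronizing physiological and gaze streams comes down to aligning independently timestamped samples on a shared clock (which is what LSL provides). A minimal nearest-timestamp alignment, with made-up EDA values, might look like this:

```python
import bisect

def nearest(ts_list, t):
    """Index of the timestamp in sorted ts_list closest to t."""
    i = bisect.bisect_left(ts_list, t)
    candidates = [j for j in (i - 1, i) if 0 <= j < len(ts_list)]
    return min(candidates, key=lambda j: abs(ts_list[j] - t))

gaze_ts = [0.00, 0.02, 0.04, 0.06]   # gaze sample times (s)
eda_ts = [0.005, 0.035, 0.065]       # EDA sample times (s)
eda_val = [0.81, 0.84, 0.9]          # invented EDA values

# Attach the nearest EDA reading to each gaze sample.
aligned = [(t, eda_val[nearest(eda_ts, t)]) for t in gaze_ts]
print(aligned)
```

SightLab performs this kind of alignment internally when writing its synchronized logs; the sketch only shows the core idea.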

Body and Facial Tracking:

  • Face tracking (up to 60 points)
  • Hand and upper-body tracking
  • Full-body avatars (OptiTrack, Vicon, XSens, Vive Trackers, Mocopi)

🎮 Interactivity and User Experience

  • Gaze-based interactions
  • Grabbable objects (grab=True)
  • Proximity detection using vizproximity
  • Add instructions, rating scales, or questionnaires within VR
  • In-experiment 2D virtual screens for showing media or stimuli
  • Mixed Reality passthrough for MR/AR research
  • Multi-user synchronized sessions (client/instructor modes)
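The proximity detection mentioned above (handled in Vizard by vizproximity) boils down to firing a callback when the user's position first enters a radius around a point of interest. A plain-Python sketch of that idea, with illustrative names only:

```python
import math

class ProximitySensor:
    def __init__(self, center, radius, on_enter):
        self.center, self.radius, self.on_enter = center, radius, on_enter
        self.inside = False

    def update(self, position):
        """Call once per frame with the tracked position."""
        d = math.dist(position, self.center)
        if d <= self.radius and not self.inside:
            self.inside = True
            self.on_enter()  # edge-triggered: fires once per entry
        elif d > self.radius:
            self.inside = False

events = []
sensor = ProximitySensor(center=(0, 0, 0), radius=1.5,
                         on_enter=lambda: events.append("entered"))
for pos in [(3, 0, 0), (1.0, 0, 0), (0.5, 0, 0), (4, 0, 0), (0.2, 0, 0)]:
    sensor.update(pos)
print(events)  # one event per entry, not per frame inside the radius
```

The edge-triggering (tracking `inside`) is what distinguishes an enter event from simply being within range.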

🧱 Environment and Asset Management

  • Built-in 3D Model Inspector for tagging ROIs, adjusting lighting, and setting spawn points.
  • Supports import of .osgb, .fbx, .obj and 360° images/videos (mono/stereo).
  • Asset sourcing from Sketchfab, Blender, 3ds Max, Unity Asset Store, and AI-generated 3D.
  • Built-in environments (e.g. homeOffice.osgb, lecture.osgb, van_gogh_room.osgb).

📊 Replay and Analysis Tools

  • Replay complete VR sessions with gaze overlays and synchronized physiological data.
  • Visualizations:
      • 3D Gaze Paths
      • Fixation and Dwell Spheres
      • Gaze Rays and Intersection Points
      • Heatmaps and Scanpaths
      • Avatar Interactions
  • Export statistics per object:
      • Views per object
      • Total/average dwell time
      • Timeline data
  • Session video recording for external review.
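Underlying a gaze heatmap is a simple aggregation: binning gaze points into a grid and counting hits per cell. SightLab's replay tools render this in 3D; the sketch below shows only the aggregation step, with made-up 2D data.

```python
def heatmap(points, width, height, bins=4):
    """Count gaze points per cell of a bins x bins grid."""
    grid = [[0] * bins for _ in range(bins)]
    for x, y in points:
        col = min(int(x / width * bins), bins - 1)  # clamp edge points
        row = min(int(y / height * bins), bins - 1)
        grid[row][col] += 1
    return grid

# Invented normalized gaze coordinates.
points = [(0.1, 0.1), (0.15, 0.12), (0.9, 0.9), (0.5, 0.5)]
for row in heatmap(points, width=1.0, height=1.0):
    print(row)
```

A rendered heatmap is typically this grid smoothed (e.g. with a Gaussian kernel) and mapped to a color scale.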

🧠 AI-Enabled Capabilities


  • Integrate AI-powered agents (GPT-4, Claude, Gemini, Ollama, etc.)
  • Customizable personalities, speech recognition, TTS (OpenAI, ElevenLabs).
  • AI-guided adaptive learning, dynamic scene behavior, or intelligent tutors.
  • AI-assisted data analysis and experiment generation.

🧰 Built-in Tools and Extensions

  • Python code editor within the GUI, with a package manager (NumPy, Pandas, Scikit-learn).
  • Custom event triggers and flagging system.
  • Automatic data folder and trial summary creation.
  • sightlab.hud.getCurrentTime() for timestamps.
  • Extend with any Python library — e.g., psychopy, scipy, tensorflow.
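Custom event flagging amounts to appending timestamped markers to the session log. The class and column names below are assumptions for illustration, not SightLab's actual flagging API:

```python
import csv
import io
import time

class EventLogger:
    """Append timestamped event markers to a CSV stream."""

    def __init__(self, stream):
        self.writer = csv.writer(stream)
        self.writer.writerow(["timestamp", "event"])
        self.t0 = time.monotonic()  # session-relative clock

    def flag(self, event):
        self.writer.writerow([round(time.monotonic() - self.t0, 4), event])

# Log to an in-memory buffer for the example; a real session writes to a file.
buf = io.StringIO()
log = EventLogger(buf)
log.flag("trial_start")
log.flag("target_seen")
print(buf.getvalue().splitlines()[0])  # header row
```

Markers logged this way can later be joined against gaze or physiological samples by timestamp.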

💻 Hardware Support

  • VR Headsets: Meta Quest Pro, Quest 3/3S, Vive Pro Eye, Vive Focus 3 & Vision, Varjo XR-3/4, HP Omnicept, StarVR, SteamVR headsets.
  • Eye Trackers: Tobii, Pupil Labs, Varjo, Omnicept.
  • Body Tracking: OptiTrack, Vicon, XSens.
  • Physio Systems: BIOPAC, BrainVision, ANT Neuro, Shimmer, etc.
  • Input Devices: Controllers, data gloves, haptics, driving wheels, etc.
  • Projection and Cave systems supported via Vizard.

🧪 2. Example Experiment Templates

See the SightLab documentation for the full list of templates.

  • Visual Search Task
  • Phobia/Stimulus Presentation
  • Driving Simulator
  • Smooth Pursuit Eye Tracking
  • Face and Body Tracking
  • Biofeedback and Adaptive Learning
  • AI Agent Interaction
  • External Application Data Recorder
  • Multi-User VR Training
  • Image Viewer / Rating Task
  • Memory Task / Cognitive Load Study
  • Virtual 2D Screen / MR Environments
  • Drawing in 3D / Object Manipulation

Each can be modified or extended using the built-in trial system.


🎓 3. E-Learning Lab (Multi-User Education Platform)

An extension of SightLab optimized for education and collaborative training.

📚 Key Capabilities

  • Multi-User VR: Local and remote connectivity for shared learning.
  • Instructor, Student, Observer Modes.
  • No coding required – drag-and-drop GUI.
  • Prebuilt Lessons: Anatomy, Astronomy, Anthropology.
  • Asset Browser and included 3D learning assets.
  • AI Teaching Assistants for real-time tutoring.
  • Screencasting and desktop sharing in VR.
  • Student quizzes (auto-generated from lesson content).
  • AR passthrough and mixed reality support.
  • BIOPAC and eye-tracking synchronization for attention metrics.
  • Lesson creation and export for distribution or classroom synchronization.
  • Custom avatars and voice support.

🎬 Selected Examples

### 🚗 Driving Simulator

  • Fully featured driving setup using steering wheel or VR controllers.
  • Includes eye tracking for road, mirror, and hazard attention mapping.
  • Logs speed, lane deviation, gaze dwell, and reaction events.
  • Compatible with Vive Pro Eye, Varjo, Omnicept, and OpenXR setups.
  • Ideal for studies on attention, stress, and driver behavior.
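One of the logged metrics above, lane deviation, is commonly summarized as the root-mean-square lateral offset from lane center. A small sketch with invented sample values (this is a standard summary statistic, not necessarily SightLab's exact definition):

```python
import math

def lane_deviation_rms(lateral_offsets_m):
    """RMS of lateral offsets (meters) from the lane center line."""
    return math.sqrt(sum(x * x for x in lateral_offsets_m)
                     / len(lateral_offsets_m))

# Invented per-frame lateral offsets in meters.
offsets = [0.0, 0.1, -0.2, 0.05, 0.15]
print(round(lane_deviation_rms(offsets), 3))
```

RMS penalizes large excursions more than a plain mean of absolute offsets, which is why it is a common choice for lane-keeping studies.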

### 🧩 Virtual 2D Screen

  • Add any 2D surface in VR for video or image stimulus.
  • Supports mono and stereo formats (Top/Bottom, Left/Right).
  • Link playback controls to experiment conditions.
  • Combine with stimulus presentation or mixed reality tasks.

### 🌐 External Application Data Recorder

  • Records eye, face, and physiological data from any external app (e.g. SteamVR, WebVR, Unreal, Unity).
  • Synchronize gaze and head data across external VR software.
  • Generate replayable logs compatible with SightLab’s visualization tools.
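Conceptually, an external-application recorder serializes whatever timestamped samples the outside app streams in into a log that replay tools can read back. A minimal round-trip sketch using JSON lines; the field names and in-memory buffer are illustrative only:

```python
import io
import json

def record(samples, stream):
    """Write each sample as one timestamped JSON line."""
    for sample in samples:
        stream.write(json.dumps({"t": sample["t"], **sample["data"]}) + "\n")

def replay(stream):
    """Read the log back into a list of dicts for visualization."""
    stream.seek(0)
    return [json.loads(line) for line in stream]

buf = io.StringIO()
record([{"t": 0.0, "data": {"yaw": 12.5}},
        {"t": 0.1, "data": {"yaw": 13.0}}], buf)
print(replay(buf))
```

A line-per-sample format like this keeps the log appendable while data streams in and trivially parseable afterwards.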

🧰 Vizard Engine Core Features

  • High-performance 3D rendering engine (OpenGL-based)
  • Full Python API for VR development
  • Supports:
      • GUI panels (vizinfo, vizdlg, vizconfig)
      • 3D object manipulation
      • Physics simulation
      • Networking and multi-user sync
      • HTML interfaces and web data integration
      • Real-time lighting and shaders
  • Integrated package manager, plugin system, and vizconnect device configurator.