Multi-User SightLab Overview

Multi-User SightLab is a VR platform for collaborative research, training, teaching, and immersive presentations. It combines shared virtual environments, presenter/researcher-led or participant-driven sessions, integrated analytics, replay tools, and AI-powered assistance in a single system. The E-Learning Lab / VR Presentation Tool extends this with no-code lesson creation, immersive classroom workflows, AI lesson generation, quizzes, and presentation delivery for education and training.

What You Get

Shared Multi-User VR Sessions

  • Run live multi-user sessions locally or remotely in the same virtual environment
  • Support server/client session setup for instructors, facilitators, researchers, or distributed teams (a minimal sketch of this pattern follows this list)
  • Users can join as named participants with avatars and controller representations, including full-body avatars
  • Clients can see one another in the session, while a server/instructor can monitor the full experience from an overview perspective
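
For readers who want to see the shape of the server/client pattern, here is a minimal sketch in plain Python. It is illustrative only: the port, message format, and helper names are hypothetical and are not SightLab's actual networking protocol.

    import json
    import socket
    import threading

    HOST, PORT = "0.0.0.0", 5555    # hypothetical port, not SightLab's protocol

    clients = {}                    # participant name -> open connection
    lock = threading.Lock()

    def broadcast(event: dict) -> None:
        """Send one JSON event (e.g. an avatar pose update) to every participant."""
        data = (json.dumps(event) + "\n").encode()
        with lock:
            for conn in clients.values():
                conn.sendall(data)

    def handle_join(conn: socket.socket) -> None:
        """Register a joining client; its first line is the participant name."""
        name = conn.makefile("r").readline().strip()
        with lock:
            clients[name] = conn
        broadcast({"type": "joined", "name": name})

    server = socket.create_server((HOST, PORT))
    while True:
        conn, _addr = server.accept()
        threading.Thread(target=handle_join, args=(conn,), daemon=True).start()

A client would connect with socket.create_connection((host, 5555)), send its name, then read newline-delimited JSON events, while the server machine, which doubles as the instructor/overview station, relays events to everyone.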

Flexible Content Creation

  • Build experiences with little or no coding using GUI-driven workflows
  • Create custom applications with Python when advanced behavior is needed (see the sketch after this list)
  • Use 3D models, 360 media, videos, images, PDFs, slideshows, screencasts, and interactive assets
  • Includes example scripts and templates for presentations, multi-user avatars, AI agents, virtual screens, gaze interactions, quizzes, and more
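
As a point of reference for the Python route, here is a minimal scene in Vizard, the Python engine SightLab runs on; SightLab's templates layer session management, avatars, and data logging on top of a scene like this. The model file is one of Vizard's stock example assets.

    import viz

    viz.go()                                # start the Vizard render loop
    env = viz.addChild('piazza.osgb')       # stock example model shipped with Vizard
    viz.MainView.setPosition([0, 1.8, 4])   # place the viewer at roughly head height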

Supervisor, Controller, Instructor, Presenter, and Classroom Modes

  • Run sessions in single-user, multi-user, supervisor, controller, instructor, or presenter modes
  • Guide learners or participants through synchronized experiences
  • Support projection-based classrooms and observer-style workflows
  • Presentation-mode examples are included for controlling flow across connected users, as in the slide-sync sketch below
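
One way presentation flow control can look: the instructor advances a shared slide index and pushes it to every connected client. The helpers below are stand-ins (the broadcast stub plays the role of the networking sketch shown earlier), and the event fields are made up for illustration.

    def broadcast(event: dict) -> None:      # stand-in for the networking sketch above
        print("would send:", event)

    def show_slide(index: int) -> None:      # stand-in for a client-side renderer hook
        print("showing slide", index)

    current_slide = 0

    def next_slide() -> None:
        """Instructor side: advance the deck and push the new index to all clients."""
        global current_slide
        current_slide += 1
        broadcast({"type": "slide", "index": current_slide})

    def on_event(event: dict) -> None:
        """Client side: jump to whatever slide the instructor broadcast."""
        if event.get("type") == "slide":
            show_slide(event["index"])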

AI-Assisted Interactions

  • Add conversational AI agents to role-play, guide users, answer questions, explain content, and support interactive learning (see the sketch after this list)
  • Use cloud models from OpenAI, Anthropic, and Google (Gemini), or run models offline via Ollama
  • Support voice interaction, text input, text-to-speech, and multimodal/vision-enabled workflows depending on the model configuration
  • Multi-agent and avatar-based AI interactions are supported
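
A minimal sketch of a question-answering agent using the OpenAI Python client, one of the model families listed above. The system prompt and model name are placeholders; SightLab's bundled agent examples wire this kind of loop into voice input, text-to-speech, and avatars.

    from openai import OpenAI

    client = OpenAI()    # reads OPENAI_API_KEY from the environment

    history = [{"role": "system",
                "content": "You are a guide inside a VR lesson. Answer briefly."}]

    def ask(question: str) -> str:
        """Send one user turn, keeping the running conversation as context."""
        history.append({"role": "user", "content": question})
        reply = client.chat.completions.create(model="gpt-4o-mini", messages=history)
        answer = reply.choices[0].message.content
        history.append({"role": "assistant", "content": answer})
        return answer

    print(ask("What am I looking at in this scene?"))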

Research-Grade Analytics and Replay

  • Collect behavioral and interaction data during sessions
  • Track gaze intersections, eye rotation, head position, fixations, saccades, dwell time, view counts, timelines, and custom trial data
  • Generate replay files and review sessions with heatmaps, scan paths, dwell patterns, interactions, and other visual analytics
  • Filter replay and visualizations by individual client in multi-user sessions (see the dwell-time sketch after this list)
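
To make the dwell-time and per-client ideas concrete, here is one way such measures can be computed from a per-frame gaze log. The CSV columns (timestamp, client, object) are hypothetical, not SightLab's actual export schema.

    import csv
    from collections import defaultdict

    def dwell_per_object(log_path: str, client: str | None = None) -> dict[str, float]:
        """Sum gaze-sample intervals per intersected object, optionally per client.

        Assumes rows sorted by time, with columns: timestamp (seconds), client,
        and object (name of the gaze-intersected object, empty when nothing is hit).
        """
        dwell: dict[str, float] = defaultdict(float)
        prev_time = prev_obj = None
        with open(log_path, newline="") as f:
            for row in csv.DictReader(f):
                if client and row["client"] != client:
                    continue
                t, obj = float(row["timestamp"]), row["object"]
                if prev_obj:                      # credit the interval just elapsed
                    dwell[prev_obj] += t - prev_time
                prev_time, prev_obj = t, obj
        return dict(dwell)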

Hardware Cross-Compatibility

  • Support mixed hardware deployments within the same ecosystem, including different VR headsets and projection-based systems
  • Enable participants using different compatible devices to join the same collaborative session
  • Combine immersive displays with supported peripherals and research hardware such as eye trackers, data gloves, and BIOPAC physiological systems
  • Well suited to classrooms, labs, and research environments where users may need different hardware roles in one shared experience

Eye Tracking, Biometrics, and Advanced Hardware Support

  • Supports eye-tracked and non-eye-tracked headsets, with head-position analytics available where eye tracking is not present
  • Integrates with supported VR/XR hardware and can work with additional physiological and biometric systems
  • Can synchronize with BIOPAC and other external data workflows where needed (see the event-marker sketch after this list)
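
The generic idea behind synchronizing with an external recorder is simple: log labeled wall-clock markers on the VR side and send the same labels to the physiological system, then align the two streams offline. This sketch shows only that idea, not SightLab's actual BIOPAC integration; the file name is hypothetical.

    import csv
    import time

    with open("sync_markers.csv", "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["timestamp", "label"])
        # The same label would be pushed to the physiological recorder so both
        # data streams can be aligned offline on a shared event.
        writer.writerow([time.time(), "trial_start"])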

Collaboration and Interaction Tools

  • Use avatars, hand/controller models, object grabbing, gaze-enabled objects, virtual screens, drawing tools, and other interaction methods (see the gaze-dwell sketch after this list)
  • Support presentation content, collaborative review, and hands-on group activities
  • Can be used for education, team-based training, design review, simulation, and experimental research
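
A gaze-enabled object typically fires after the user has looked at it continuously for some threshold. Below is a minimal, engine-agnostic accumulator for that behavior; the class and callback names are illustrative, not SightLab's API.

    class GazeTrigger:
        """Fire a callback once a target is gazed at for `threshold` seconds."""

        def __init__(self, threshold: float, on_fire) -> None:
            self.threshold = threshold
            self.on_fire = on_fire
            self.elapsed = 0.0
            self.fired = False

        def update(self, gazing_at_target: bool, dt: float) -> None:
            """Call once per frame with the frame duration `dt` in seconds."""
            if not gazing_at_target:
                self.elapsed = 0.0            # looking away resets the accumulator
                return
            self.elapsed += dt
            if not self.fired and self.elapsed >= self.threshold:
                self.fired = True
                self.on_fire()

    trigger = GazeTrigger(0.5, lambda: print("object selected"))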

E-Learning Lab Highlights

The E-Learning Lab / VR Presentation Tool builds on Multi-User SightLab for immersive education and training.

Designed for Teaching and Training

  • Create immersive VR presentations without programming
  • Organize presentations, lessons, scenes, and media inside a GUI-based authoring environment (the sketch after this list shows the kind of structure this produces)
  • Add slideshows, PDFs, 3D models, quizzes, videos, audio, 360 media, and screencasts
  • Launch experiences in single-user, instructor, or client modes
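
To give a feel for what the authoring environment organizes, here is the kind of lesson structure it produces, written as a Python dict. The keys and asset names are purely illustrative, not the tool's actual file format.

    lesson = {
        "title": "Intro to the Solar System",
        "scenes": [
            {
                "environment": "space_station.gltf",   # hypothetical asset names
                "media": ["orbit_diagram.pdf", "launch_clip.mp4"],
                "quiz": [
                    {
                        "question": "Which planet is largest?",
                        "choices": ["Earth", "Jupiter", "Mars"],
                        "answer": "Jupiter",
                    }
                ],
            }
        ],
    }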

AI-Powered Lesson Creation

  • Generate VR lesson content from prompts or existing materials (see the quiz-generation sketch after this list)
  • Add AI teaching assistants that respond to student questions and explain scene content
  • Support contextual AI interactions, voice workflows, and multilingual use, depending on configuration
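
A minimal sketch of prompt-driven quiz generation with the OpenAI Python client; the prompt wording, model name, and JSON shape are illustrative assumptions, and the tool's own generator may work differently.

    import json
    from openai import OpenAI

    client = OpenAI()

    def generate_quiz(topic: str, n: int = 3) -> list[dict]:
        """Ask the model for `n` multiple-choice questions as structured JSON."""
        prompt = (f"Write {n} multiple-choice questions about {topic}. "
                  'Return JSON shaped as {"questions": [{"question": ..., '
                  '"choices": [...], "answer": ...}]}.')
        reply = client.chat.completions.create(
            model="gpt-4o-mini",
            messages=[{"role": "user", "content": prompt}],
            response_format={"type": "json_object"},   # forces parseable JSON output
        )
        return json.loads(reply.choices[0].message.content)["questions"]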

Learning Analytics

  • Measure engagement and attention through eye tracking and behavioral data collection
  • Review sessions through replay and visual analytics
  • Combine immersive teaching with measurable learner interaction data

Ideal Use Cases

  • VR classrooms and immersive labs
  • Higher education and university libraries
  • Instructor-led training
  • Collaborative research studies
  • Team-based simulation and remote participation
  • Presentation delivery in VR
  • Behavioral and eye-tracking research
  • Interactive learning with AI agents

Summary

Multi-User SightLab combines shared VR experiences, content authoring, AI assistance, and research-grade analytics in one platform. The E-Learning Lab adds a more education-focused layer for immersive lessons, presentation delivery, classroom control, and learning analytics, making it suitable for both enterprise/R&D collaboration and teaching-focused VR deployments.