SightLab 2.6 and later include built-in support for Full and Upper Body Tracked Avatars using an enhanced version of Vizard’s Inverse Kinematics (IK) system. This allows experiments to use:
Full body avatars for single-user or multi-user sessions
A wide range of tracking inputs (OpenXR passthrough body tracking, Vive Trackers, VR headset and controller sensors, desktop keyboard and mouse, OptiTrack/Xsens full rigs, and more); note that Multi-User functionality may be limited with some systems
Lip movement animations when speech is detected
Finger tracking through OpenXR, Manus gloves, Meta Quest Pro/3 controllers, or external systems (in multi-user sessions, finger tracking is only available via OpenXR controller-based finger curls; single-user sessions support full finger movement)
Optional mirror views for self-observation
Automatic integration with SightLab’s data logging, IK, avatar selection, and multi-user networking
1. Enabling Full Body Avatars
1.1. From the GUI (Single or Multi-User)
In both the SightLab GUI and Multi-User Client, simply:
Open the project
Go to Avatar Settings
Enable ✔ Full Body
This unlocks the list of compatible avatars stored in:
Notes:
- Other avatar types: pending full retargeting support
- Replay Mode: not yet supported for full body avatars; replays will show gaze position, but full body playback is not yet available
For more avatars than the ones included, contact support@worldviz.com or sales@worldviz.com for optional avatar packages.
Click here to see the list of Complete Characters avatars available.
There is planned support for additional avatar libraries (such as Mixamo, ReadyPlayerMe, Reallusion, etc.) across all tracking methods, but for now only Complete Characters work outside of the single-user OpenXR method.
For additional avatar libraries, see the SightLab Asset Browser and the "Avatar Libraries" section.
3. Tracking Systems
SightLab supports multiple tracking workflows, ranging from simple controller-based setups to full-body motion capture. This section details the behavior and limitations of each tracking method.
3.1 Standard VR Headset + Controllers (Head/Hands Only) (Vive Focus Vision, Quest 2, Vive Pro, Vive Pro Eye, HP Omnicept, Varjo, etc.)
This is the most common setup and the default when no body tracking hardware is present.
Avatar Behavior
Head follows the headset
Hands follow the controllers
Upper-body IK only
Torso, shoulders, elbows move via IK
Legs remain in a static standing pose (no lower-body IK)
Notes
Works in Single-User and Multi-User
Not full body tracking — upper body only
3.2 Desktop Mode (No VR Hardware)
Desktop mode simulates movement without VR equipment.
Controls:
- WASD – Move the avatar
- Mouse – Look / rotate head
- Scroll wheel – Raise/lower main hand
- Left click – Trigger interactions
Useful for development or desktop-based studies.
3.3 OpenXR Passthrough (Upper Body + Finger Curls) (Quest 3, Quest Pro — device dependent)
This is the only method that supports controller-based finger curls from Quest Pro/Quest 3 for Multi-User.
Avatar Behavior
Head follows HMD
Hands follow controllers
Upper-body IK
Legs remain in static standing pose (no lower-body IK)
Finger curl tracking works ONLY for Quest Pro / Quest 3 controllers
Finger Tracking Details
Single-User: full finger control
Multi-User: finger motion synced only via controller-based finger curls (not full hand joints)
4. Data Logging
SightLab automatically records core gaze, head, and interaction data for every session. Additional body or finger data can be logged manually by the user.
4.1 Data Logged by Default
SightLab automatically saves the following metrics regardless of hardware:
Head & Hands
Head pose (position + rotation)
Hand pose (position + rotation)
Controller button states
Gaze & Fixations
Combined and per-eye gaze intersection points
Fixations, saccades, and dwell metrics
Gaze timestamps and event markers
4.2 Optional: Logging Additional Body or Finger Data
Researchers can add custom trial data columns for any avatar bone, tracker, or finger joint using:
sightlab.setCustomTrialData(value, "ColumnName")
You may optionally log:
- Additional finger joint or curl values
- Any bone position or rotation
- Any VR device tracker (hip, foot, elbow, chest, etc.)
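As a minimal sketch of adding one such column (the value and the column name here are illustrative placeholders; any value computed during the trial can be logged this way):

# Minimal sketch: add one custom per-trial column.
# 'pinch_strength' and the column name "PinchStrength" are illustrative
# placeholders for any value you have computed during the trial.
pinch_strength = 0.73
sightlab.setCustomTrialData(str(pinch_strength), "PinchStrength")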
Finger Curls (User-Added Logging)
SightLab does not save detailed finger joint values by default. Finger curl logging can be added using the example: ExampleScripts\Hand_Tracking_Grabbing_Physics
This demonstrates:
- Reading Quest Pro / Quest 3 finger curl data
- Adding custom trial data columns
- Saving pinch, grip, and joint values
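A minimal sketch of the logging half of that pattern is shown below. Here get_finger_curls() is a hypothetical placeholder for however you read curl values (for example, from the example script above), and the column names are illustrative:

import vizact

def get_finger_curls():
    # Hypothetical placeholder: in practice these values would come from the
    # Quest Pro / Quest 3 controller hand-tracking data (see the example script).
    return {'thumb': 0.0, 'index': 0.0, 'middle': 0.0, 'ring': 0.0, 'pinky': 0.0}

def log_finger_curls():
    # Write one custom column per finger, rounded for readability.
    for finger, value in get_finger_curls().items():
        sightlab.setCustomTrialData(str(round(value, 3)), "RCurl " + finger)

# Log once per frame (assumes a 'sightlab' experiment object exists in your script)
vizact.onupdate(0, log_finger_curls)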
4.3 Optional: Logging Full Body Joint Positions
Users may record specific joint positions or rotations (hips, feet, elbows, etc.) using vizconnect.getTracker().
Example: Logging right-hand tracker pose every frame
import vizact
import vizconnect  # needed for vizconnect.getTracker()

rightHandTracker = vizconnect.getTracker("r_hand_tracker").getNode3d()
leftHandTracker = vizconnect.getTracker("l_hand_tracker").getNode3d()

def log_right_hand():
    # Get position and orientation
    pos = rightHandTracker.getPosition()
    rot = rightHandTracker.getEuler()
    # Round for readability
    pos = [round(x, 3) for x in pos]
    rot = [round(x, 3) for x in rot]
    # Add custom columns
    sightlab.setCustomTrialData(str(pos[0]), "RHand X")
    sightlab.setCustomTrialData(str(pos[1]), "RHand Y")
    sightlab.setCustomTrialData(str(pos[2]), "RHand Z")
    sightlab.setCustomTrialData(str(rot[0]), "RHand Yaw")
    sightlab.setCustomTrialData(str(rot[1]), "RHand Pitch")
    sightlab.setCustomTrialData(str(rot[2]), "RHand Roll")

# Update every frame
vizact.onupdate(0, log_right_hand)
This pattern can be used for:
- Hip, foot, chest, or elbow trackers
- Wrist-mounted markers
- Glove-based systems
- Any avatar bone via avatar.getBone()
- Any custom transform in the scene
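For example, a minimal sketch of logging an avatar bone's rotation each frame (the bone name 'Bip01 Head' is an assumption and depends on the avatar's skeleton; an 'avatar' object and a 'sightlab' experiment object are assumed to exist in your script):

import vizact

# Assumed to exist in your script: 'avatar' (the added avatar) and 'sightlab'.
# 'Bip01 Head' is a typical bone name but depends on the avatar's skeleton.
headBone = avatar.getBone('Bip01 Head')

def log_head_bone():
    # Round the Euler angles for readability and write one column per axis
    rot = [round(x, 3) for x in headBone.getEuler()]
    sightlab.setCustomTrialData(str(rot[0]), "HeadBone Yaw")
    sightlab.setCustomTrialData(str(rot[1]), "HeadBone Pitch")
    sightlab.setCustomTrialData(str(rot[2]), "HeadBone Roll")

# Update every frame
vizact.onupdate(0, log_head_bone)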
4.4 Replay Mode Limitations
Replay mode currently supports:
- Head pose
- Gaze and fixations
- Interaction events
Replay does not yet support:
- Full-body playback
- Finger tracking playback
4.5 Full List of Saved Metrics
For a complete list of all default metrics (fixations, saccades, dwell time, object interactions, etc.), see:
🔗 Common Metrics in SightLab
https://help.worldviz.com/sightlab/common-metrics/
5. Motion Capture for Avatar Animations
Recorded motion capture animations can be used for avatar playback in SightLab. There are many options for creating animations, from AI-based video conversion to high-end motion capture systems. Animations can be exported via MotionBuilder or other tools and applied to avatars.
AI-Based Animation from Video
These tools convert standard video (captured with a phone or camera) into animations using AI algorithms:
Since the full body avatars are not the same as the default SightLab avatar, they do not currently appear in Replay Mode; without the code below you would only see the gaze ray and gaze point. You can add the following code to your replay script to show a default avatar head indicating where the user moved.
import viz
import vizfx
import vizact
from sightlab_utils import replay as Replay
from sightlab_utils.replay_settings import *

replay = Replay.SightLabReplay()

# where '1' is the first client, '2' the second, etc.
avatarHead = replay.sceneObjects[AVATAR_HEAD_OBJECT]['1']
glowbot = vizfx.addChild(r'C:\Program Files\WorldViz\Vizard8\bin\lib\site-packages\sightlab_utils\resources\avatar\head\GlowManBlue.osgb')

def replaceAvatarHead():
    # Link the placeholder head model to the replayed head transform
    viz.link(avatarHead, glowbot)

vizact.onupdate(0, replaceAvatarHead)