HeyGen LiveAvatar Integration for AI Agent
This document explains how to use HeyGen's streaming avatars with the AI Agent system.
Overview
HeyGen LiveAvatar provides photorealistic AI avatars that can speak and respond in real-time via WebRTC streaming. This integration allows you to swap out the 3D avatar for a HeyGen streaming avatar displayed on a video screen in VR.
There are two ways to use HeyGen LiveAvatar with SightLab:
1. Browser ScreenCast Method (Simple) - Run LiveAvatar in a browser and screencast it into SightLab
2. API Integration Method (Advanced) - Direct API integration for programmatic control
Method 1: Browser ScreenCast (Recommended for Quick Setup)
This is the simplest way to use HeyGen LiveAvatar with SightLab or E-Learning Lab. You run the avatar in your browser and screencast the window into the VR environment.
Step 1: Create a LiveAvatar Account
- Go to LiveAvatar and sign up for an account
- Log in to access the LiveAvatar dashboard
Step 2: Choose or Create an Avatar
- Browse the available avatars in the LiveAvatar library
- Select an existing avatar or create a new one
- Customize your avatar's appearance and voice settings as desired
Step 3: Configure Your Avatar Session
You can either:
- Embed in a webpage: Get the embed code and run it on a local webpage
- Run directly from LiveAvatar: Use the LiveAvatar web interface directly
Step 4: Configure Browser Settings
Important: Turn off hardware acceleration in your browser for proper screen capture.
For Chrome:
1. Go to chrome://settings
2. Scroll to "System"
3. Toggle "Use hardware acceleration when available" OFF
4. Restart Chrome
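As an alternative to toggling the setting by hand, Chrome can be launched with GPU acceleration disabled via its `--disable-gpu` command-line flag. The sketch below only builds the launch command; the executable path and URL are assumptions — adjust both for your system.

```python
import subprocess

# Path to the Chrome executable -- assumption: a default Windows install;
# on macOS/Linux the path differs.
CHROME_PATH = r"C:\Program Files\Google\Chrome\Application\chrome.exe"

# Placeholder URL -- replace with the page hosting your LiveAvatar session.
LIVEAVATAR_URL = "https://example.com/liveavatar"

# --disable-gpu turns off hardware acceleration for this launch, which
# helps the screen-capture step pick up the window contents correctly.
cmd = [CHROME_PATH, "--disable-gpu", LIVEAVATAR_URL]
print(cmd)

# Uncomment to actually launch the browser:
# subprocess.Popen(cmd)
```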
Step 5: Select Your Microphone
Note: Using the default microphone option often doesn't work properly. You must specifically select your microphone device in the LiveAvatar settings.
- In the LiveAvatar interface, open audio/microphone settings
- Select your specific microphone device from the dropdown (do not use "Default")
- Test that your audio is being captured
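The rule above — pick a named device, never "Default" — can be expressed as a small filter. The device names below are illustrative assumptions; on a real system you would enumerate them with an audio library (e.g. `sounddevice.query_devices()`).

```python
# Illustrative device list -- names are assumptions for the sketch.
devices = [
    "Default",
    "Microphone (Realtek High Definition Audio)",
    "Headset Microphone (USB Audio Device)",
]

def pick_microphone(devices, keyword):
    """Return the first named device matching keyword, skipping 'Default'."""
    for name in devices:
        if name != "Default" and keyword.lower() in name.lower():
            return name
    return None

print(pick_microphone(devices, "USB"))  # → Headset Microphone (USB Audio Device)
```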
Step 6: Run the ScreenCast Script
Note: for the E-Learning Lab, simply drag in the "Cast" object from the Videos tab.
- Start the LiveAvatar session in your browser
- Run the SightLab screencast script: `python HeyGen_ScreenCast.py`
- A window selection dialog will appear; select "liveavatarapp" from the dropdown
- Put on your VR headset
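The window-selection step amounts to matching open window titles against the browser tab running LiveAvatar. This sketch shows the matching logic only; the window titles are illustrative assumptions, not output from the actual script.

```python
# Illustrative window titles -- assumptions for the sketch.
open_windows = [
    "Document1 - Word",
    "liveavatarapp - Google Chrome",
    "File Explorer",
]

def find_capture_window(titles, needle="liveavatarapp"):
    """Return the first window title containing the needle, or None."""
    matches = [t for t in titles if needle.lower() in t.lower()]
    return matches[0] if matches else None

print(find_capture_window(open_windows))  # → liveavatarapp - Google Chrome
```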
Step 7: Start Your Session
| Key | Action |
|---|---|
| Space | Start/end trial (begins recording) |
| m | Scale video screen up |
| n | Scale video screen down |
| t | Print current screen position/rotation/scale |
- Press Space to start the trial and begin recording
- Interact with the LiveAvatar in your browser - have a conversation
- Eye tracking data and Biopac data (if connected) will be collected automatically
- Press Space again to end the trial
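The key bindings above can be modeled as a small state machine. This is an illustrative sketch of the behavior, not the actual SightLab implementation.

```python
class TrialSession:
    """Illustrative model of the key handling described above
    (assumption: not the actual SightLab code)."""

    def __init__(self):
        self.recording = False
        self.screen_scale = 1.0

    def handle_key(self, key):
        if key == "space":
            # Space toggles the trial: start recording, or stop it.
            self.recording = not self.recording
        elif key == "m":
            self.screen_scale *= 1.1   # scale video screen up
        elif key == "n":
            self.screen_scale /= 1.1   # scale video screen down
        elif key == "t":
            print(f"screen scale: {self.screen_scale:.2f}")

session = TrialSession()
session.handle_key("space")
print(session.recording)  # → True
session.handle_key("space")
print(session.recording)  # → False
```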
Step 8: View Your Data
After the session, your data files are saved in the `data/` folder:
- `experiment_summary.csv` - Overview of the experiment
- `trial_data/` - Individual trial CSV files with eye tracking, biometric data, etc.
- `replay_data/` - Replay files for playback
- `recordings/` - Video recordings of the screencast
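Since the trial data is plain CSV, it can be inspected with the standard library. The column names below are assumptions for the sketch — check the actual header of your `experiment_summary.csv`.

```python
import csv
import io

# Stand-in for data/experiment_summary.csv -- the columns here are
# assumptions; inspect your real file's header row.
sample = io.StringIO(
    "trial,start_time,end_time\n"
    "1,0.0,42.5\n"
    "2,45.0,90.2\n"
)

rows = list(csv.DictReader(sample))
for row in rows:
    print(row["trial"], row["start_time"], row["end_time"])
```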
Step 9: Replay Your Session
Run the replay script to view the recorded session with synchronized data:
`python HeyGen_ScreenCast_Replay.py`
- Select the video file from the dialog (most recent files appear first)
- The replay will show the 3D VR scene with the video feed playing back
- View synchronized eye tracking visualizations and interactions
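The "most recent files first" ordering in the file dialog is a sort by modification time. A minimal sketch (the real folder is `data/recordings/`; the `.mp4` extension is an assumption):

```python
import os
import tempfile
from pathlib import Path

def most_recent(folder, pattern="*.mp4"):
    """Return matching files sorted newest-first by modification time."""
    return sorted(Path(folder).glob(pattern),
                  key=lambda p: p.stat().st_mtime, reverse=True)

# Demonstrate with temporary files standing in for data/recordings/.
with tempfile.TemporaryDirectory() as d:
    old = Path(d) / "session_old.mp4"
    old.touch()
    new = Path(d) / "session_new.mp4"
    new.touch()
    # Force distinct timestamps so the ordering is deterministic.
    os.utime(old, (1_000_000_000, 1_000_000_000))
    os.utime(new, (2_000_000_000, 2_000_000_000))
    ordered = [p.name for p in most_recent(d)]
    print(ordered)  # → ['session_new.mp4', 'session_old.mp4']
```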
Method 2: API Integration (Advanced - Work in Progress)
For programmatic control and tighter integration, you can use the HeyGen API directly. Note: this is a work in progress; contact support@worldviz.com for details.
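As a rough sketch of what a direct integration might involve, the snippet below builds (but does not send) a session-creation request. The endpoint path, auth header, and payload fields are all assumptions — verify them against the LiveAvatar documentation linked below before use.

```python
import json

API_KEY = "YOUR_HEYGEN_API_KEY"  # placeholder -- from your HeyGen account

def build_new_session_request(avatar_id, quality="high"):
    """Assemble an illustrative streaming-session request.

    Assumption: endpoint, header name, and body fields are guesses based
    on HeyGen's public streaming docs, not confirmed by this document.
    """
    return {
        "url": "https://api.heygen.com/v1/streaming.new",   # assumed endpoint
        "headers": {
            "x-api-key": API_KEY,                           # assumed auth header
            "content-type": "application/json",
        },
        "body": json.dumps({"avatar_id": avatar_id, "quality": quality}),
    }

req = build_new_session_request("my_avatar")  # "my_avatar" is a placeholder id
print(req["url"])
```

Sending the request (e.g. with `requests.post`) and wiring the resulting stream into SightLab is the work-in-progress part; contact support for the current state of the API integration.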
Resources
- LiveAvatar Documentation
- LiveAvatar Quick Start
- FULL Mode Events
- LiveKit Python SDK
- HeyGen Community
License
This integration is provided as-is for use with SightLab VR.
HeyGen/LiveAvatar services require a separate subscription.