HeyGen LiveAvatar Integration for AI Agent

This document explains how to use HeyGen's streaming avatars with the AI Agent system.

Overview

HeyGen LiveAvatar provides photorealistic AI avatars that can speak and respond in real-time via WebRTC streaming. This integration allows you to swap out the 3D avatar for a HeyGen streaming avatar displayed on a video screen in VR.

There are two ways to use HeyGen LiveAvatar with SightLab:
1. Browser ScreenCast Method (Simple) - Run LiveAvatar in a browser and screencast it into SightLab
2. API Integration Method (Advanced) - Direct API integration for programmatic control


Method 1: Browser ScreenCast Method (Simple)

This is the simplest way to use HeyGen LiveAvatar with SightLab or E-Learning Lab: run the avatar in your browser and screencast the window into the VR environment.

Step 1: Create a LiveAvatar Account

  1. Go to LiveAvatar and sign up for an account
  2. Log in to access the LiveAvatar dashboard

Step 2: Choose or Create an Avatar

  1. Browse the available avatars in the LiveAvatar library
  2. Select an existing avatar or create a new one
  3. Customize your avatar's appearance and voice settings as desired

Step 3: Configure Your Avatar Session

You can either:
- Embed in a webpage: Get the embed code and run it on a local webpage
- Run directly from LiveAvatar: Use the LiveAvatar web interface directly

Step 4: Configure Browser Settings

Important: Turn off hardware acceleration in your browser for proper screen capture.

For Chrome:
1. Go to chrome://settings
2. Scroll to "System"
3. Toggle "Use hardware acceleration when available" OFF
4. Restart Chrome
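
If you prefer not to change the setting globally, Chromium-based browsers can also be launched with hardware acceleration disabled for a single session using the --disable-gpu flag. The install path below is an example for a typical Windows install; adjust it for your system:

```shell
# Launch Chrome once with GPU hardware acceleration off (path is an example)
"C:\Program Files\Google\Chrome\Application\chrome.exe" --disable-gpu
```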

Step 5: Select Your Microphone

Note: The "Default" microphone option often fails to capture audio correctly. You must explicitly select your microphone device in the LiveAvatar settings.

  1. In the LiveAvatar interface, open audio/microphone settings
  2. Select your specific microphone device from the dropdown (do not use "Default")
  3. Test that your audio is being captured
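
The "do not use Default" rule above can be expressed as a small filter over the device names your audio settings list. This is a sketch for illustration only; the function name and matching rule are assumptions, not part of the LiveAvatar interface.

```python
def pick_microphone(device_names, avoid="default"):
    """Return the first concrete device name, skipping the generic default entry."""
    for name in device_names:
        if avoid not in name.lower():
            return name
    return None  # no concrete device found

# Example device list as an audio settings dropdown might show it
print(pick_microphone(["Default", "USB Microphone (C920)", "Headset Mic"]))
```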

Step 6: Run the ScreenCast Script

Note: For the E-Learning Lab, you just need to drag the "Cast" object in from the Videos tab.

  1. Start the LiveAvatar session in your browser
  2. Run the SightLab screencast script:
    python HeyGen_ScreenCast.py
    
  3. A window selection dialog will appear - select "liveavatarapp" from the dropdown
  4. Put on your VR headset
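
The window selection in step 3 amounts to finding the browser window whose title contains "liveavatarapp". A minimal sketch, assuming a list of window titles as a window-enumeration library would report them (the function name is illustrative, not part of the actual SightLab script):

```python
def find_liveavatar_window(titles, keyword="liveavatarapp"):
    """Return the first window title containing the keyword (case-insensitive),
    or None if there is no match."""
    for title in titles:
        if keyword.lower() in title.lower():
            return title
    return None

# Example: candidate window titles from the OS
open_windows = ["Untitled - Notepad", "liveavatarapp - Google Chrome"]
print(find_liveavatar_window(open_windows))
```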

Step 7: Start Your Session

Key     Action
Space   Start/end trial (begins recording)
m       Scale video screen up
n       Scale video screen down
t       Print current screen position/rotation/scale

  1. Press Space to start the trial and begin recording
  2. Interact with the LiveAvatar in your browser - have a conversation
  3. Eye tracking data and Biopac data (if connected) will be collected automatically
  4. Press Space again to end the trial
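
The keyboard controls above can be sketched as a small key-dispatch function. The state dictionary, key names, and scale step below are assumptions for illustration; the real script's internals may differ.

```python
SCALE_STEP = 1.1  # assumed zoom increment per key press

def handle_key(key, state):
    """Update session state for one key press (sketch of the Step 7 bindings)."""
    if key == "space":
        state["recording"] = not state["recording"]   # start/end trial
    elif key == "m":
        state["screen_scale"] *= SCALE_STEP           # scale video screen up
    elif key == "n":
        state["screen_scale"] /= SCALE_STEP           # scale video screen down
    elif key == "t":
        print("screen scale:", round(state["screen_scale"], 3))
    return state

state = {"recording": False, "screen_scale": 1.0}
handle_key("space", state)  # begins recording
handle_key("m", state)      # scales the video screen up
```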

Step 8: View Your Data

After the session, your data files are saved in the data/ folder:
- experiment_summary.csv - Overview of the experiment
- trial_data/ - Individual trial CSV files with eye tracking, biometric data, etc.
- replay_data/ - Replay files for playback
- recordings/ - Video recordings of the screencast
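
After a session, the per-trial CSVs can be inspected with the standard library. The folder layout matches the list above, but the column names depend on whatever the SightLab scripts write, so this sketch makes no assumptions about them:

```python
import csv
import glob
import os

def load_trial_rows(trial_dir):
    """Read every CSV in a trial_data/ folder into a list of dicts (one per row)."""
    rows = []
    for path in sorted(glob.glob(os.path.join(trial_dir, "*.csv"))):
        with open(path, newline="") as f:
            rows.extend(csv.DictReader(f))
    return rows

# Usage after a session:
# rows = load_trial_rows("data/trial_data")
# print(len(rows), "samples")
```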

Step 9: Replay Your Session

Run the replay script to view the recorded session with synchronized data:

    python HeyGen_ScreenCast_Replay.py

  1. Select the video file from the dialog (most recent files appear first)
  2. The replay will show the 3D VR scene with the video feed playing back
  3. View synchronized eye tracking visualizations and interactions
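
The "most recent files appear first" ordering in the selection dialog amounts to sorting by modification time. A minimal sketch for picking the newest recording programmatically (the function name and mp4 extension are assumptions):

```python
import glob
import os

def most_recent_recording(folder, pattern="*.mp4"):
    """Return the newest file matching the pattern, or None if there are none."""
    files = glob.glob(os.path.join(folder, pattern))
    return max(files, key=os.path.getmtime) if files else None

# Usage:
# latest = most_recent_recording("data/recordings")
```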

Method 2: API Integration (Advanced - Work in Progress)

For programmatic control and tighter integration, you can use the HeyGen API directly. Note: this is a work in progress; contact support@worldviz.com for details.
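
As a hedged sketch of what direct integration could look like: HeyGen's streaming API is session-based (you create a streaming session, then send speak/task requests to it). The endpoint path, payload fields, and header name below are assumptions based on HeyGen's public API documentation and should be verified against the current docs before use:

```python
import json
import urllib.request

API_BASE = "https://api.heygen.com/v1"  # assumption: verify against HeyGen docs

def build_new_session_request(api_key, avatar_id, quality="medium"):
    """Build (but do not send) a request to create a streaming avatar session."""
    payload = {"avatar_id": avatar_id, "quality": quality}
    return urllib.request.Request(
        f"{API_BASE}/streaming.new",
        data=json.dumps(payload).encode("utf-8"),
        headers={"X-Api-Key": api_key, "Content-Type": "application/json"},
        method="POST",
    )

# Sending it would look like:
# with urllib.request.urlopen(build_new_session_request(key, avatar_id)) as resp:
#     session_info = json.load(resp)
```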

Resources

License

This integration is provided as-is for use with SightLab VR.
HeyGen/LiveAvatar services require a separate subscription.