This document describes how to use the AI Agent in SightLab, an interactive, intelligent agent that can be connected to various large language models such as ChatGPT, Claude, Gemini, offline Ollama models, and more. You can customize the agent's personality, use speech recognition, and leverage high-quality text-to-speech models.
Note: For compatibility you may need to use the latest config files (delete any existing ones in the configs folder). Download the latest here. If you have added custom configs, you can keep those, but you may need to add FACE_VIEW_OFFSET = 0.2 to their options.
Key Features
Interact and converse with custom AI Large Language Models in real-time VR or XR simulations.
Choose from OpenAI, Anthropic, and Gemini models (including vision capabilities), as well as hundreds of offline models via Ollama (such as DeepSeek, Gemma, Llama, Mistral, and more).
Modify avatar appearance, animations, environment, and more. Works with most avatar libraries (Avaturn, ReadyPlayerMe, Mixamo, Rocketbox, Reallusion, etc.).
Customize the agent's personality, contextual awareness, emotional state, interactions, and more. Save your creations as custom agents.
Use speech recognition to converse using your voice or text-based input.
Choose from high-quality voices from OpenAI TTS or Eleven Labs (requires API), or customize and create your own.
Train the agent as it adapts using conversation history and interactions.
Works with all features of SightLab, including data collection, visualizations, and transcript saving.
Easily add to any SightLab script.
Interactive Events (new as of SightLab 2.5.10): Avatars can now trigger custom events to be even more interactive, such as responding with appropriate facial expressions, using animations, or interacting in the scene based on the conversation context.
Support for many languages
Instructions
1. Installation
Ensure you have the required libraries installed using the Vizard Package Manager (a quick import check is sketched after this list). These include:
openai (for OpenAI GPT agents)
anthropic (for Anthropic Claude agents)
elevenlabs (for ElevenLabs text-to-speech; note: this seems to have slightly faster responses)
google (for Gemini)
google-generativeai (for Gemini)
ollama (for offline models via Ollama)
SpeechRecognition
sounddevice (pyaudio for older versions of SightLab)
faster_whisper
numpy
pyttsx3 (for Microsoft's offline text-to-speech engine)
piper (for the offline Piper voice library; voice library samples here)
Note: Piper has the lowest latency (often around 1 second or less, though it can vary between 1 and 4 seconds); the other models have higher quality but may take 3-5 seconds from when you finish speaking to when they respond.
Piper can start generating audio as soon as the first phonemes are processed, often outputting speech within 100-300 ms of receiving text.
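To confirm that the packages above are importable from Vizard's Python environment, a quick check like the following can be run. The module list is illustrative; some import names differ from the package names shown in Package Manager, and piper's import name may vary by version.

# Quick, optional check that the required modules can be imported.
# Import names are not always identical to the package names in Package Manager.
import importlib

modules = [
    'openai', 'anthropic', 'elevenlabs', 'google.generativeai',
    'ollama', 'speech_recognition', 'sounddevice', 'faster_whisper',
    'numpy', 'pyttsx3',
]

for name in modules:
    try:
        importlib.import_module(name)
        print(f'{name}: OK')
    except ImportError:
        print(f'{name}: missing')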
For ElevenLabs, you may need to install ffmpeg. Download ffmpeg here.
Go to "Release builds" section and download ffmpeg-release-full.7z. Extract this folder and copy address to where the "bin" folder exists. Paste this address into ffmpeg_path.txt in the keys folder.
mpv Player: For ElevenLabs, install mpv and add it to the PATH environment variable:
Unzip the mpv folder and move it to C:\Program Files\
In Windows Search or the Start menu, type "powershell", then right-click and run as administrator.
Type this command (adjust the folder name to match your mpv version): setx /M PATH "$($env:PATH);C:\Program Files\mpv-x86_64-20250812-git-211c9cb"
Restart Vizard
For offline models, install Ollama from here. After installation, open a command prompt (type cmd into Windows Search) and type ollama run followed by the name of the LLM you wish to pull (e.g., ollama run gemma:7b). For a list of models, see this page. The first call may take a little longer as the model needs time to load (subsequent calls should be quicker).
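Once a model has been pulled, you can sanity-check it outside of SightLab with the ollama Python package, roughly as follows. The model name is just an example; use whichever model you pulled.

# Minimal check that a pulled Ollama model responds (independent of SightLab)
import ollama

response = ollama.chat(
    model='gemma:7b',  # example only; use the model you pulled
    messages=[{'role': 'user', 'content': 'Say hello in one sentence.'}],
)
print(response['message']['content'])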
Note: Requires an active internet connection if not running offline models.
2. API Keys
Obtain API keys from OpenAI, Anthropic, and ElevenLabs (if using ElevenLabs TTS); see below.
New Method (as of SightLab 2.3.7)
In Windows Search, type "cmd" to open a command prompt, then enter setx OPENAI_API_KEY your-api-key, setx GEMINI_API_KEY your-api-key, and so on for each key you need.
With this method, you don't need to keep the keys in a folder in your project, and your API keys can be accessed from any folder.
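Keys stored with setx are ordinary environment variables, so they can be read from any script. A minimal sketch (restart Vizard after running setx so the new variables are picked up):

# Environment variables set with setx are available to any later process
import os

openai_key = os.environ.get('OPENAI_API_KEY')
gemini_key = os.environ.get('GEMINI_API_KEY')
if openai_key is None:
    print('OPENAI_API_KEY is not set; check your setx command and restart Vizard')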
For offline models via Ollama no key is needed.
3. Running the Script
Run AI_Agent.py to start the AI Agent or AI_Agent_GUI.py to run with the GUI. multi_agent_interaction.py will run a sample multi-user interaction.
4. Interaction
Hold the 'c' key or RH grip button to speak; release to let the agent respond.
If USE_SPEECH_RECOGNITION is False, press 'c' to type a question instead.
To stop the conversation, type "q" and click "OK" in the text chat.
If HOLD_KEY_TO_SPEAK is False, you only need to speak; after a pause of more than 0.8 seconds the agent will respond, then wait for you to speak again (a simplified pause-detection sketch follows this list). This mode requires wearing headphones.
Press 'h' to send a screenshot as a prompt and ask questions about what the AI agent is seeing (in the sub-window)
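The automatic turn-taking described above (HOLD_KEY_TO_SPEAK = False) can be illustrated with a simplified sketch. This is not SightLab's actual implementation; only the 0.8-second pause length is taken from the behavior described, and the RMS threshold is an assumed placeholder.

# Illustrative pause-detection loop using sounddevice (not the actual SightLab code)
import numpy as np
import sounddevice as sd

SAMPLE_RATE = 16000
BLOCK_SECONDS = 0.1
SILENCE_RMS = 0.01      # assumed threshold; tune for your microphone
PAUSE_SECONDS = 0.8     # matches the pause length described above

def record_until_pause():
    """Capture microphone audio until the speaker pauses for PAUSE_SECONDS."""
    blocks, silent_time = [], 0.0
    with sd.InputStream(samplerate=SAMPLE_RATE, channels=1) as stream:
        while silent_time < PAUSE_SECONDS:
            block, _ = stream.read(int(SAMPLE_RATE * BLOCK_SECONDS))
            blocks.append(block)
            rms = float(np.sqrt(np.mean(block ** 2)))
            silent_time = silent_time + BLOCK_SECONDS if rms < SILENCE_RMS else 0.0
    return np.concatenate(blocks)  # this audio would then go to the speech recognizer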
5. Configuration
Open Config_Global.py (or AI_Agent_Config.py in the configs folder for older versions) and configure the options. Here are a few you may want to modify:
AI_MODEL: Choose between 'CHAT_GPT', 'CLAUDE', and Gemini.
OPENAI_MODEL: Specify the OpenAI model name (e.g., "gpt-4o"). List of models
ANTHROPIC_MODEL: Specify the Anthropic model name (e.g., "claude-3-5-sonnet-20240620"). List of models
GEMINI_MODEL: Specify the Gemini model
OFFLINE_MODEL: Specify offline model from list of models supported via Ollama (deepseek-r1, gemma3, llama3.3, etc.)
MAX_TOKENS: Adjust token usage per exchange (e.g., GPT-4 has 8192 tokens).
USE_SPEECH_RECOGNITION: Toggle speech recognition vs. text-based interaction.
SPEECH_MODEL: Choose between OpenAI TTS, Eleven Labs, or Pyttsx3.
HOLD_KEY_TO_SPEAK: Set to True to hold the C key or RH grip to speak, otherwise waits for silence.
USE_GUI: Enable GUI for selecting environments and options.
ELEVEN_LABS_VOICE: Choose an Eleven Labs voice. See samples on the ElevenLabs website.
Additional options include avatar properties, environment models, GUI settings, and more (see the Config_Global.py script).
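For reference, a typical block of these settings in Config_Global.py might look roughly like the following. The values and the exact option strings (for example, the SPEECH_MODEL value) are illustrative assumptions, so check your own config for the names your version expects.

# Illustrative values only -- adjust to match your SightLab version and setup
AI_MODEL = 'CHAT_GPT'                            # 'CHAT_GPT', 'CLAUDE', or the Gemini option
OPENAI_MODEL = 'gpt-4o'
ANTHROPIC_MODEL = 'claude-3-5-sonnet-20240620'
OFFLINE_MODEL = 'gemma3'                         # used when running via Ollama
MAX_TOKENS = 1024
USE_SPEECH_RECOGNITION = True                    # False = text-based input
SPEECH_MODEL = 'OPENAI_TTS'                      # value string may differ in your version
HOLD_KEY_TO_SPEAK = True                         # hold 'c' / RH grip to speak
USE_GUI = True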
AI Agent Event System
As of SightLab 2.5.10 the AI agent now supports event-driven interactions, allowing the AI to trigger custom actions during conversations for more expressive and interactive experiences.
The event system allows the AI to execute custom actions (like facial expressions, animations, or any other callback) by including special "event" keywords in its responses. These events are automatically detected, executed, and removed from the text shown to the user.
How It Works
Event Detection: The AI includes event: <event_name> on a line in its response
Event Execution: The system detects this line, triggers the corresponding handler
Text Cleaning: The event line is removed before displaying text to the user
Action: The custom action (e.g., facial expression, processing a screenshot, etc.) is performed
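A simplified sketch of this flow (not the actual SightLab code, which lives in AI_Agent_Avatar.py) might look like this, assuming a dictionary-like registry that maps event names to handler functions:

# Simplified illustration of the detect / execute / clean steps described above
EVENT_KEYWORD = 'event:'

def handle_response(text, registry):
    """Trigger any 'event: <name>' lines and return the cleaned text."""
    kept_lines = []
    for line in text.splitlines():
        stripped = line.strip()
        if stripped.lower().startswith(EVENT_KEYWORD):
            event_name = stripped[len(EVENT_KEYWORD):].strip()
            handler = registry.get(event_name)     # e.g. {'smile': event_smile}
            if handler:
                handler()                          # Event Execution
        else:
            kept_lines.append(line)                # Text Cleaning
    return '\n'.join(kept_lines)                   # text shown/spoken to the user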
Configuration
Global Settings (Config_Global.py)
# Event System Settings
USE_EVENT_SYSTEM = True          # Enable/disable event system
EVENT_KEYWORD = "event:"         # Keyword that triggers events

# Morph Target Indices for Facial Expressions
SMILE_MORPH_ID = 3               # Avatar-specific morph index for smile
SAD_MORPH_ID = 2                 # Avatar-specific morph index for sad
EXPRESSION_MORPH_AMOUNT = 0.7    # Intensity (0.0 to 1.0)
EXPRESSION_DURATION = 1.2        # Duration in seconds
Each avatar config can override the default morph indices:
# Event System - Facial Expression Morph Targets
SMILE_MORPH_ID = 3               # RocketBox smile morph
SAD_MORPH_ID = 2                 # RocketBox sad morph
EXPRESSION_MORPH_AMOUNT = 0.7
EXPRESSION_DURATION = 1.2
Built-in Events
Facial Expressions
smile: Makes avatar smile (uses SMILE_MORPH_ID)
sad: Makes avatar look sad (uses SAD_MORPH_ID)
neutral: Returns avatar to neutral expression (resets both morphs)
Vision
Capture and process a screenshot of the scene: When asked things such as "What do you see?", the agent can capture and process a screenshot to gain an understanding of its surroundings.
Placeholder Events (for future implementation)
wave: Wave animation placeholder
nod: Nod head placeholder
Finding Morph Target Indices for Your Avatar
Different avatars have different morph target indices. To find the correct indices:
Load your avatar in Inspector
Click on the avatar name in the scene graph and view the morph IDs on the right side Properties pane under "Morphs"
Update your avatar config file with the correct indices
Creating Custom Events
You can easily add your own custom events:
Step 1: Create the Event Handler Function
In AI_Agent_Avatar.py, after the existing event handlers:
def event_my_custom_action():
    """Description of what this event does"""
    try:
        # Your custom code here. Examples:
        # - Trigger animations: avatar.state(MY_ANIMATION)
        # - Move objects: object.setPosition([x, y, z])
        # - Play sounds: viz.playSound('sound.wav')
        # - Change lighting: viz.clearcolor(viz.RED)
        print("Custom action executed!")
    except Exception as e:
        print(f"Error in custom event: {e}")
Step 2: Register the Event
In the register_default_events() function:
def register_default_events():
    """Register all built-in event handlers"""
    if USE_EVENT_SYSTEM:
        EVENT_REGISTRY.register("smile", event_smile)
        EVENT_REGISTRY.register("sad", event_sad)
        EVENT_REGISTRY.register("neutral", event_neutral)
        EVENT_REGISTRY.register("my_custom_action", event_my_custom_action)  # Add this
        # ... rest of the events
Step 3: Update AI Prompt
Add your custom event to the prompt file so the AI knows about it:
Available events you can trigger:
- smile: Makes you smile
- sad: Makes you look sad
- event: screenshot - Take and analyze a screenshot of what you're seeing
- my_custom_action: Description of what it does
Example Prompts
Choose prompts/Event_System_Demo.txt for a comprehensive example prompt that teaches the AI how to use events effectively.
Usage Examples
Example 1: Simple Emotional Response
User: "I just won the lottery!"
AI Response (raw):
event: smile
That's incredible! Congratulations! You must be so excited!
What happens:
Avatar smiles (smile morph applied)
User sees: "That's incredible! Congratulations! You must be so excited!"
Modifying Environment and Avatars
Refer to this page for instructions on obtaining assets. Place new assets in resources/environments or resources/avatar/full_body.
To add a new avatar, first place your avatar in the Resources/avatars folder (or reference your own path in the avatar config file). Next, navigate to the configs folder and make a copy of one of the configs. Rename it to the name you want to use for your avatar. Modify the config file to reference the path to your avatar, the voices you want to use, which animations are the static and talking animations (you will need to open the avatar in Inspector to see the available animations), and anything else you want to change.
To easily change the position of the avatar in a script, place the avatarStandin model in your environment in Inspector. See this page for more details.
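If you would rather set the position directly in code, standard Vizard node calls can be used. This is only an example independent of the agent config; the avatar path and values here are placeholders.

# Example only: place an avatar node with standard Vizard calls
import viz

agent_avatar = viz.addAvatar('resources/avatars/full_body/my_avatar.cfg')  # hypothetical path
agent_avatar.setPosition([0, 0, 2])   # x, y, z in meters
agent_avatar.setEuler([180, 0, 0])    # yaw so the avatar faces the viewer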
Adding to Existing Scripts
Copy the "configs", "keys", and "prompts" folders, as well as AI_Agent_Avatar.py.
See the script multi_agent_interaction.py to see how multiple agents can interact and communicate with each other. You can modify the individual agents by calling the AIAgent class and setting parameters such as config_path, name and prompt_path.
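As a rough sketch of what that can look like (the import location, constructor signature, and file names below are assumptions; check multi_agent_interaction.py for the actual usage):

# Rough sketch only -- see multi_agent_interaction.py for the real example
from AI_Agent_Avatar import AIAgent   # assumed import location

agent_one = AIAgent(
    name='Ava',
    config_path='configs/my_female_avatar_config.py',   # hypothetical config file
    prompt_path='prompts/Event_System_Demo.txt',
)
agent_two = AIAgent(
    name='Max',
    config_path='configs/my_male_avatar_config.py',     # hypothetical config file
    prompt_path='prompts/Event_System_Demo.txt',
)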
OpenAI (alternative to the environment-variable method above): Paste your copied API key into a text file named key.txt and place it in your root SightLab folder.
Set a usage limit on your account if needed: OpenAI Usage.
Eleven Labs (for ElevenLabs Text-to-Speech):
Log in to your ElevenLabs account: https://elevenlabs.io
Go to your profile, locate the "API Key" field, and copy it (note: make sure to enable unrestricted access or toggle on access to voices, etc.), or go to https://elevenlabs.io/app/developers/api-keys
Paste the key into a file named elevenlabs_key.txt in your root SightLab folder.
Connecting Assistants: You can connect Assistants via the OpenAI API (custom GPTs not supported).
Issues and Troubleshooting
Microphone/Headset Conflicts: Errors may occur if microphone settings differ between the VR headset and output device.
Character Limit in ElevenLabs: Free tier limits characters to 10,000 (paid accounts get more).
ffplay Error: May require ffmpeg to be installed in the environment path: Download ffmpeg.
The avatar is not responding or speaking: Sometimes it may take a while to process the text-to-speech. If it does not seem to be responding at all, try going to Script > Stop to stop any previous scripts, or close and re-open Vizard.
Tips
Environment Awareness: To give your agent an understanding of the environment, press 'h' to take a screenshot that is sent to the agent, or simply ask it "What do you see?", "What are we looking at?", etc.
Event Trigger for Speech Button: Modify vizconnect to add an event for speaking button hold. Open settings.py in sightlab_utils/vizconnect_configs, and modify mappings for triggerDown and triggerUp or create new ones if needed. More Info on Vizconnect Events
If you get an "out of quota" error with Gemini, try using a model with more quota, such as gemini-1.5-flash-latest, or enable billing for much higher limits.
A version of this is also available that runs as an education-based tool, where you can select objects in a scene and get information and labels for that item (such as paintings in an art gallery). See this page for that version.