# AI 3D Model Spawner

A dynamic SightLab VR demo that lets you generate and interact with 3D models in real time using generative AI tools such as Meshy or MasterpieceX.

Located in `ExampleScripts/AI_3D_Model_Spawner`
## 🧩 What It Does
AI 3D Model Spawner allows participants or researchers to spawn objects into a virtual scene on demand, simply by typing text prompts or by using speech recognition to speak model requests (e.g. "a glowing mushroom"). The models are created via online APIs, refined, and automatically placed in the VR environment. You can also set `USE_AI` to `False` if you just want to drag and drop models and set up a scene.
- 🔤 Text → 3D model
- 🧠 Generative AI (Meshy and MasterpieceX are currently supported; others will be added)
- 👁️ Integrated gaze tracking and grabbing via SightLab
- 🌗 Toggle lighting with the `B` key
- 🔁 Generate new models at any time with the `N` key
## Direct Manipulation & Management (Keyboard)
You can translate, rotate, scale, select, delete, store, and load models created or dropped at runtime.
### Active Selection
- Next/Prev: `]` / `[`
- Jump First/Last: `{` (Shift+[) / `}` (Shift+])
- A subtle highlight/marker indicates the current selection. All actions below target the active model.
- You can also click a model with the mouse in the desktop mirrored view.
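The next/previous and jump behavior boils down to index cycling with wraparound. A minimal sketch of that logic (the function name and the `-1` empty-list convention are illustrative assumptions, not the spawner's actual code):

```python
def cycle_selection(index: int, count: int, step: int) -> int:
    """Return the next active model index, wrapping past either end.

    index -- current active index
    count -- number of spawned models
    step  -- +1 for next (]), -1 for previous ([)
    """
    if count == 0:
        return -1  # nothing to select
    return (index + step) % count
```

With three models, pressing `]` from the last one wraps back to the first, and `[` from the first wraps to the last.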
### Translate (Move)
- X axis: `g` (+X / right) / `G` (−X / left)
- Y axis: `h` (+Y / up) / `H` (−Y / down)
- Z axis: `j` (+Z / forward) / `J` (−Z / back)

Hold a key to move smoothly. Motion speed can be tuned in code via `MOVE_STEP_M`.
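The hold-to-move behavior amounts to a per-frame nudge along one axis. A sketch of that step, assuming the `MOVE_STEP_M` constant named above; the function, axis mapping, and default value are illustrative, not the actual implementation:

```python
MOVE_STEP_M = 0.02  # meters moved per frame while a key is held (assumed value)

AXES = {'x': 0, 'y': 1, 'z': 2}

def translate(position, axis, direction):
    """Return a new [x, y, z] nudged by MOVE_STEP_M along `axis`.

    direction is +1 for the lowercase key, -1 for the Shifted key.
    """
    pos = list(position)
    pos[AXES[axis]] += direction * MOVE_STEP_M
    return pos
```

Calling this once per rendered frame while the key is down produces the smooth motion described above.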
### Rotate (Euler)
- Yaw (around Y): `t` / `T`
- Pitch (around X): `y` / `Y`
- Roll (around Z): `u` / `U`

Hold to rotate smoothly. Rotation speed can be tuned via `ROT_STEP_DEG`.
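Like translation, rotation is a per-frame increment of one Euler angle, kept in the 0–360 range. A sketch using the `ROT_STEP_DEG` constant named above (the function and default step value are assumptions for illustration):

```python
ROT_STEP_DEG = 1.5  # degrees rotated per frame while a key is held (assumed value)

def rotate_euler(euler, axis_index, direction):
    """Step one Euler angle by ROT_STEP_DEG, wrapping into [0, 360).

    axis_index -- 0 = yaw, 1 = pitch, 2 = roll (ordering assumed)
    direction  -- +1 for lowercase key, -1 for the Shifted key
    """
    e = list(euler)
    e[axis_index] = (e[axis_index] + direction * ROT_STEP_DEG) % 360.0
    return e
```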
### Scale (Uniform)
- Scale down / up: `k` / `m`

Scaling acts on the selected model.
### Delete / Remove
- Delete model + newest JSON: `x`
- Delete model + all JSONs: `X` (Shift+x)
Removes the active model from the scene and cleans up its saved-state JSON file(s) if present.
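The JSON cleanup can be sketched as below, assuming a hypothetical naming scheme in which snapshots in `saved_models/` begin with the model file's stem; the real naming convention may differ:

```python
import glob
import os

def delete_model_jsons(model_path, saved_dir="saved_models", newest_only=True):
    """Remove the saved-state JSON(s) for a model and return what was deleted.

    Assumes snapshots are named <model-stem>*.json inside `saved_dir`.
    newest_only=True mirrors `x` (newest JSON); False mirrors `X` (all JSONs).
    """
    stem = os.path.splitext(os.path.basename(model_path))[0]
    matches = sorted(glob.glob(os.path.join(saved_dir, stem + "*.json")),
                     key=os.path.getmtime)
    targets = matches[-1:] if newest_only else matches
    for path in targets:
        os.remove(path)
    return targets
```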
### Drag & Drop Models
- Drop `.osgb`, `.glb`, or `.gltf` files directly onto the Vizard window.
- The model is added with `vizfx.addChild` and becomes the active selection.
- Models spawn in front of the user with Y aligned to the current eye height.
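The "in front of the user at eye height" placement is simple trigonometry on the user's position and yaw. A hedged sketch (the function, the assumed +Z-forward convention, and the default distance are illustrative, not the spawner's actual code):

```python
import math

def spawn_position(user_pos, yaw_deg, eye_height, distance=1.5):
    """Return a point `distance` meters in front of the user,
    with Y set to the current eye height.

    Assumes a Y-up coordinate system where yaw 0 faces +Z and
    positive yaw turns toward +X (a common VR convention).
    """
    yaw = math.radians(yaw_deg)
    x = user_pos[0] + math.sin(yaw) * distance
    z = user_pos[2] + math.cos(yaw) * distance
    return [x, eye_height, z]
```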
## 💾 Storing & Loading Models
### Store (Save) Model State
- Press `1` to save and `2` to load models.
- When a model is generated or dragged in, its source path is recorded.
- Use the script's save action (exposed in the spawner) to write a JSON snapshot into `saved_models/` (includes model path, transform, and optional metadata).
- Re-saving overwrites the most recent JSON for the same model path, or you can keep multiple timestamped variants (configurable).
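A snapshot like the one described could look like the following. The schema (key names, one JSON per model stem) is a hypothetical illustration of "model path + transform + metadata", not the spawner's exact format:

```python
import json
import os
import time

def save_model_state(model_path, position, euler, scale, out_dir="saved_models"):
    """Write a JSON snapshot of a model's source path and transform.

    Overwrites any existing snapshot for the same model stem, matching
    the re-save behavior described above. Returns the written path.
    """
    os.makedirs(out_dir, exist_ok=True)
    state = {
        "model_path": model_path,
        "position": position,   # [x, y, z] in meters
        "euler": euler,         # [yaw, pitch, roll] in degrees
        "scale": scale,         # uniform scale factor
        "saved_at": time.strftime("%Y-%m-%d %H:%M:%S"),  # optional metadata
    }
    stem = os.path.splitext(os.path.basename(model_path))[0]
    out_path = os.path.join(out_dir, stem + ".json")
    with open(out_path, "w") as f:
        json.dump(state, f, indent=2)
    return out_path
```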
### Load Saved Models
- Use the accompanying Model Loader (`Model_Loader.py`) to:
  - Browse and load JSON snapshots from `saved_models/`.
  - Press `l` to load a model, `a` to load all models, `c` to clear, and `i` to list loaded models.
  - Cycle selection (`[`, `]`, `{`, `}`) and manipulate models just as in the spawner.
  - Delete the current selection (`x`/`X`), which also removes its associated JSON(s).
Tip: You can maintain a curated library of models + placements by saving JSONs per object/condition and reloading them as needed.
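Reloading a curated library amounts to reading every snapshot back in. A minimal sketch, assuming each JSON in `saved_models/` holds a `model_path` plus transform fields (a hypothetical schema, not necessarily the loader's exact one):

```python
import glob
import json
import os

def load_saved_states(saved_dir="saved_models"):
    """Read every JSON snapshot in `saved_dir`, sorted by filename.

    Each returned dict can then be passed to your scene-loading code
    to re-add the model and restore its transform.
    """
    states = []
    for path in sorted(glob.glob(os.path.join(saved_dir, "*.json"))):
        with open(path) as f:
            states.append(json.load(f))
    return states
```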
## ⚙️ Setup Instructions
### 1. 📦 Install Required Python Libraries
```shell
pip install mpx-genai-sdk pillow requests
```
SightLab and Vizard dependencies must already be installed and configured.
### 2. 🔑 Get an API Key
#### 🔹 Meshy
- Visit: https://www.meshy.ai/
- Log in and go to: https://www.meshy.ai/api
- Copy your API key
- Requires a paid subscription (minimum $20/month)
#### 🔹 MasterpieceX
- Sign up & docs: https://developers.masterpiecex.com
- Create your app and generate a Bearer Token here:
  👉 https://developers.masterpiecex.com/apps
- 💰 No subscription required; just purchase API credits
  - Example: 2,500 credits = $15
  - You can purchase more as needed
### 3. 📁 Set the API Keys as Environment Variables
In the Windows search bar, type "cmd", open a Command Prompt, and enter the following:
#### 🔹 For Meshy:

```shell
setx MESHY_API_KEY "your-meshy-api-key"
```
#### 🔹 For MasterpieceX:

```shell
setx MPX_SDK_BEARER_TOKEN "your-mpx-bearer-token"
```
⚠️ Important: Restart Vizard (or your terminal) after setting these.
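At runtime the script can read these keys back from the environment. A minimal sketch of such a lookup with a clear error when a key is missing (the helper function is an illustration; only the two variable names above come from the docs):

```python
import os

# Environment variable per provider, as set with the setx commands above.
KEY_NAMES = {
    "meshy": "MESHY_API_KEY",
    "masterpiecex": "MPX_SDK_BEARER_TOKEN",
}

def get_api_key(provider):
    """Return the API key for `provider`, or raise a clear error if unset."""
    name = KEY_NAMES[provider]
    key = os.environ.get(name)
    if not key:
        raise RuntimeError(
            f"{name} is not set; run setx and restart Vizard or your terminal."
        )
    return key
```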
### 4. How to Run
- Run `AI_3D_Model_Spawner.py`
- Choose your model generation library (currently MasterpieceX or Meshy) and your hardware
- Press `N` to enter a text prompt (e.g., "a red futuristic drone")
- Press and hold `c` or the right-hand grip button to speak a command; release to send it
- A model is generated via the preview API call (geometry only). A placeholder sphere appears until the model loads.
- Once loaded, the model is automatically refined with texture and PBR materials
- For Meshy, the refined model replaces the preview in-scene (MasterpieceX has no preview model)
- Press `B` to toggle lighting on/off to see which looks better
- Press `N`, the grip button, or `c` again to generate another model
- Grab the new model with the trigger buttons or the left mouse button
- Press the trigger or spacebar to end the trial
Estimated time to model completion:
- MasterpieceX: 1–2 minutes
- Meshy: 2–4 minutes
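Because generation takes minutes, the generate-then-refine flow is naturally a polling loop. A provider-agnostic sketch: `check_status` stands in for whatever API call queries the task (the actual Meshy/MasterpieceX endpoints and response shapes are not shown here, and the status strings are assumptions):

```python
import time

def wait_for_model(check_status, timeout_s=300, poll_s=5, sleep=time.sleep):
    """Poll `check_status()` until the task finishes or times out.

    check_status -- caller-supplied function querying the provider's API;
                    assumed to return a dict with a "status" key of
                    "pending", "done", or "failed".
    Returns the final result dict on success.
    """
    waited = 0
    while waited <= timeout_s:
        result = check_status()
        if result.get("status") == "done":
            return result
        if result.get("status") == "failed":
            raise RuntimeError("model generation failed")
        sleep(poll_s)
        waited += poll_s
    raise TimeoutError("model generation timed out")
```

While this loop runs, the spawner shows the placeholder sphere; when it returns, the real model can be swapped in.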
## Global Config Options
```python
# ===== Passthrough / AR Settings =====
USE_PASSTHROUGH = False

# ===== Environment Settings =====
ENVIRONMENT_MODEL = 'sightlab_resources/environments/RockyCavern.osgb'

# ===== GUI Options =====
USE_GUI = False

# ===== Speech Recognition =====
USE_SPEECH_RECOGNITION = True

# ===== AI Model Generation =====
USE_AI = True

# ===== Model Options =====
MODEL_STARTING_POINT = [0, 1.5, 2]
MODEL_STARTING_POINT_PASSTHROUGH = [0, 1.5, 0.8]

# ===== Data Saving =====
SAVE_DATA = False
```
## 🧠 Example Use Cases
| Field | Example Ideas |
|---|---|
| 🧠 Psychology | Generate phobic stimuli like a spider, a syringe |
| 🦴 Education | Spawn anatomy models: human skull, brain cross-section |
| ➕ Math/Cognition | Test symbolic vs object views: 3 apples, a number line |
| 🧪 UX/VR Dev | Prototype object-based interactions in VR |
| 🎮 Game Studies | On-demand in-scene assets for testing |
## 💡 Tips
- All generated models are saved into the `/Resources/` folder automatically
- Meshy preview models can be deleted once the refined version is ready
- Object names are auto-generated from the prompt text for traceability
- You can grab models with controllers or interact via gaze, as well as translate, rotate, scale, and load them
- Additionally supports dragging and dropping extra models in `.glb`, `.gltf`, or `.osgb` format
- SightLab automatically logs gaze and grab data per object