A dynamic SightLab VR demo that lets you generate and interact with 3D models in real time using generative AI tools such as Meshy or MasterpieceX.
Located in ExampleScripts/AI_3D_Model_Spawner
🧩 What It Does
AI 3D Model Spawner lets participants or researchers spawn objects into a virtual scene on demand, either by typing text prompts or by speaking model requests using speech recognition (e.g. “a glowing mushroom”). The models are created via online APIs, refined, and automatically placed in the VR environment. You can also set USE_AI to False if you just want to drag and drop models and set up a scene without generation.
🔤 Text → 3D model
🧠 Generative AI (Meshy and MasterpieceX are currently supported; others will be added)
👁️ Integrated gaze tracking and grabbing via SightLab
🌗 Toggle lighting with B key
🔁 Generate new models any time using N key
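Under the hood, generating a model means submitting the text prompt to the provider's REST API and polling until the model file is ready. A minimal, provider-agnostic sketch of that flow (the `submit` and `poll` callables, the job-status strings, and the field names here are illustrative assumptions, not the actual Meshy or MasterpieceX API):

```python
import time

def generate_model(prompt, submit, poll, interval_s=2.0, timeout_s=300.0):
    """Submit a text prompt to a text-to-3D service and wait for the result.

    `submit(prompt)` is assumed to return a job id; `poll(job_id)` is assumed
    to return a dict like {"status": ..., "model_url": ...}. Both are injected
    so the same loop works with any provider (Meshy, MasterpieceX, ...).
    """
    job_id = submit(prompt)
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        job = poll(job_id)
        if job["status"] == "succeeded":
            return job["model_url"]   # download, then add to the scene
        if job["status"] == "failed":
            raise RuntimeError(f"generation failed for job {job_id}")
        time.sleep(interval_s)
    raise TimeoutError(f"job {job_id} did not finish in {timeout_s}s")
```

In the demo the returned model file would then be added to the scene and become the active selection.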
Direct Manipulation & Management (Keyboard)
You can translate, rotate, scale, select, delete, store, and load models created or dropped at runtime.
Active Selection
Next / Prev: ] / [
Jump First / Last: { (Shift+[) / } (Shift+])
A subtle highlight/marker indicates the current selection. All actions below target the active model.
Can also click using the mouse on the desktop mirrored view
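Cycling the active selection is just wraparound indexing over the list of spawned models. A sketch of that bookkeeping (class and attribute names are illustrative, not the demo's actual variables):

```python
class Selection:
    """Tracks the active model among those spawned at runtime."""

    def __init__(self, models):
        self.models = models                # model handles, in spawn order
        self.index = 0 if models else -1

    def current(self):
        return self.models[self.index] if self.index >= 0 else None

    def next(self):                         # ']' key
        if self.models:
            self.index = (self.index + 1) % len(self.models)
        return self.current()

    def prev(self):                         # '[' key
        if self.models:
            self.index = (self.index - 1) % len(self.models)
        return self.current()

    def first(self):                        # '{' (Shift+[)
        if self.models:
            self.index = 0
        return self.current()

    def last(self):                         # '}' (Shift+])
        if self.models:
            self.index = len(self.models) - 1
        return self.current()
```

All manipulation actions then operate on whatever `current()` returns.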
Translate (move)
X axis: g (+X / right) • G (−X / left)
Y axis: h (+Y / up) • H (−Y / down)
Z axis: j (+Z / forward) • J (−Z / back)
Hold the key to move smoothly. Movement speed can be tuned in code via MOVE_STEP_M.
Rotate (Euler)
Yaw (around Y): t / T
Pitch (around X): y / Y
Roll (around Z): u / U
Hold to rotate smoothly. Rotation speed can be tuned via ROT_STEP_DEG.
Scale (uniform)
Scale Down / Up: k / m
Hold to scale smoothly; like the other manipulations, scaling acts on the active model.
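All three manipulations reduce to adding a small per-tick step to the active model's position, Euler angles, or scale while a key is held. A sketch of that dispatch (the step values echo MOVE_STEP_M and ROT_STEP_DEG from the text but the numbers, the Euler ordering, and the state layout are assumptions):

```python
MOVE_STEP_M = 0.01    # metres per tick while a move key is held (assumed value)
ROT_STEP_DEG = 1.0    # degrees per tick while a rotate key is held (assumed value)
SCALE_STEP = 0.02     # uniform scale change per tick (assumed value)

# key -> (state field, component index, signed step);
# euler is assumed to be [yaw, pitch, roll], Vizard's usual ordering
KEY_STEPS = {
    "g": ("pos", 0, +MOVE_STEP_M), "G": ("pos", 0, -MOVE_STEP_M),      # X axis
    "h": ("pos", 1, +MOVE_STEP_M), "H": ("pos", 1, -MOVE_STEP_M),      # Y axis
    "j": ("pos", 2, +MOVE_STEP_M), "J": ("pos", 2, -MOVE_STEP_M),      # Z axis
    "t": ("euler", 0, +ROT_STEP_DEG), "T": ("euler", 0, -ROT_STEP_DEG),  # yaw
    "y": ("euler", 1, +ROT_STEP_DEG), "Y": ("euler", 1, -ROT_STEP_DEG),  # pitch
    "u": ("euler", 2, +ROT_STEP_DEG), "U": ("euler", 2, -ROT_STEP_DEG),  # roll
}

def apply_key(state, key):
    """Apply one tick of a held key to a transform state dict
    {"pos": [x, y, z], "euler": [yaw, pitch, roll], "scale": s}."""
    if key == "m":
        state["scale"] += SCALE_STEP
    elif key == "k":
        state["scale"] = max(0.0, state["scale"] - SCALE_STEP)
    elif key in KEY_STEPS:
        field, i, step = KEY_STEPS[key]
        state[field][i] += step
    return state
```

In the demo, the resulting state would be pushed onto the selected model each frame (e.g. via its setPosition / setEuler / setScale methods).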
Delete / Remove
Delete + newest JSON: x
Delete + all JSONs: X (Shift+x)
Removes the active model from the scene and cleans up its saved-state JSON file(s) if present.
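Deleting a model also has to clean up any snapshots written for it. A sketch of the JSON-cleanup half, assuming each file in saved_models/ records the model's source path under a "model_path" key (the key name and file layout are assumptions):

```python
import json
from pathlib import Path

def delete_model_jsons(model_path, saved_dir="saved_models", newest_only=False):
    """Remove saved-state JSON(s) whose "model_path" matches model_path.

    newest_only=True mirrors 'x' (remove only the newest snapshot);
    False mirrors 'X' (remove all snapshots for that model).
    Returns the paths that were removed.
    """
    matches = []
    # sort by modification time so the last entry is the newest snapshot
    for f in sorted(Path(saved_dir).glob("*.json"), key=lambda p: p.stat().st_mtime):
        try:
            data = json.loads(f.read_text())
        except (json.JSONDecodeError, OSError):
            continue                       # skip unreadable files
        if data.get("model_path") == model_path:
            matches.append(f)
    targets = matches[-1:] if newest_only else matches
    for f in targets:
        f.unlink()
    return targets
```

The scene-side half (removing the node itself) would run alongside this in the demo.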
Drag & Drop Models
Drop .osgb, .glb, or .gltf files directly onto the Vizard window.
The model is added with vizfx.addChild and becomes the active selection.
Spawns in front of the user, with its Y position aligned to the current eye height.
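The drop handler boils down to two checks: is the file a supported model format, and where should it appear. A sketch of both, assuming Vizard's convention that yaw 0 faces +Z (the helper names and the 1.5 m default distance are illustrative):

```python
import math
from pathlib import Path

SUPPORTED_EXTS = {".osgb", ".glb", ".gltf"}

def accepted_drop(path):
    """Return True if the dropped file can be loaded as a model."""
    return Path(path).suffix.lower() in SUPPORTED_EXTS

def spawn_position(head_pos, head_yaw_deg, distance=1.5):
    """Place the model `distance` metres in front of the user,
    with Y taken from the current eye height (head_pos[1])."""
    yaw = math.radians(head_yaw_deg)
    return [head_pos[0] + distance * math.sin(yaw),
            head_pos[1],
            head_pos[2] + distance * math.cos(yaw)]
```

An accepted file would then be loaded with vizfx.addChild, positioned at the computed point, and made the active selection.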
💾 Storing & Loading Models
Store (Save) Model State
Press 1 to save and 2 to load models
When a model is generated or dragged in, its source path is recorded.
Use your script’s save action (exposed in the spawner) to write a JSON snapshot into saved_models/ (includes model path, transform, optional metadata).
Re-saving overwrites the most recent JSON for that same model path, or you can keep multiple timestamped variants (configurable).
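A snapshot just serializes the model's source path, transform, and any metadata. A minimal sketch of the writer, covering both the overwrite and the timestamped-variant behaviors described above (the field names and file-naming scheme are assumptions, not the demo's exact schema):

```python
import json
import time
from pathlib import Path

def save_snapshot(model_path, pos, euler, scale,
                  saved_dir="saved_models", timestamped=True, metadata=None):
    """Write a JSON snapshot of one model's state into saved_dir.

    timestamped=True keeps multiple variants per model; False overwrites
    a single file per model path. Returns the path written.
    """
    saved = Path(saved_dir)
    saved.mkdir(parents=True, exist_ok=True)
    stem = Path(model_path).stem
    name = f"{stem}_{int(time.time())}.json" if timestamped else f"{stem}.json"
    snapshot = {
        "model_path": str(model_path),
        "transform": {"pos": pos, "euler": euler, "scale": scale},
        "metadata": metadata or {},
    }
    out = saved / name
    out.write_text(json.dumps(snapshot, indent=2))
    return out
```
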
Load Saved Models
Use the accompanying Model Loader (Model_Loader.py) to:
Browse and load JSON snapshots from saved_models/.
Press l to load a model, a to load all models, c to clear, and i to list loaded models.
Cycle selection ([, ], {, }) and manipulate just like in the spawner.
Delete the current selection (x / X), which also removes its associated JSON(s).
Tip: You can maintain a curated library of models + placements by saving JSONs per object/condition and reloading them as needed.
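Loading is the inverse of saving: read every snapshot in saved_models/ and re-add each model with its stored transform. A sketch of the reading half, assuming each snapshot stores "model_path" and "transform" entries (the field names are assumptions):

```python
import json
from pathlib import Path

def load_snapshots(saved_dir="saved_models"):
    """Read all JSON snapshots in saved_dir, returning (file, snapshot)
    pairs sorted by filename; unreadable files are skipped."""
    entries = []
    for f in sorted(Path(saved_dir).glob("*.json")):
        try:
            entries.append((f, json.loads(f.read_text())))
        except (json.JSONDecodeError, OSError):
            continue
    return entries
```

In the loader, each returned entry would be re-added to the scene (e.g. with vizfx.addChild) and given its stored position, rotation, and scale.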
⚙️ Setup Instructions
1. 📦 Install Required Python Libraries
pip install mpx-genai-sdk pillow requests
SightLab and Vizard dependencies must already be installed and configured.