
AI 3D Model Spawner

Download Latest Version

A dynamic SightLab VR demo that lets you generate and interact with 3D models in real time using generative AI tools such as Meshy or MasterpieceX.

Located in ExampleScripts/AI_3D_Model_Spawner


🧩 What It Does

AI 3D Model Spawner lets participants or researchers spawn objects into a virtual scene on demand, simply by typing text prompts or speaking model requests via speech recognition (e.g. "a glowing mushroom"). The models are created via online APIs, refined, and automatically placed in the VR environment. You can also set USE_AI to False if you just want to drag and drop models and set up a scene.

  • 🔤 Text → 3D model
  • 🧠 Generative AI (Meshy and MasterpieceX currently supported; others will be added)
  • 👁️ Integrated gaze tracking and grabbing via SightLab
  • 🌗 Toggle lighting with the B key
  • 🔁 Generate new models at any time with the N key

Direct Manipulation & Management (Keyboard)

You can translate, rotate, scale, select, delete, store, and load models created or dropped at runtime.

Active Selection

  • Next/Prev: ] / [
  • Jump First/Last: { (Shift+[) / } (Shift+])
  • A subtle highlight/marker indicates the current selection. All actions below target the active model.
  • You can also select a model by clicking it with the mouse in the desktop mirrored view
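The wrap-around selection cycling described above can be sketched as a small helper. This is an illustrative model, not the demo's actual implementation; class and method names are assumptions.

```python
# Hypothetical sketch of active-selection cycling with wrap-around,
# mirroring the ] / [ and { / } keys described above.
class ModelSelector:
    def __init__(self, models):
        self.models = list(models)
        self.index = 0 if self.models else -1

    def current(self):
        return self.models[self.index] if self.models else None

    def next(self):                      # ']'
        if self.models:
            self.index = (self.index + 1) % len(self.models)
        return self.current()

    def prev(self):                      # '['
        if self.models:
            self.index = (self.index - 1) % len(self.models)
        return self.current()

    def first(self):                     # '{'
        if self.models:
            self.index = 0
        return self.current()

    def last(self):                      # '}'
        if self.models:
            self.index = len(self.models) - 1
        return self.current()
```

All of the actions below (move, rotate, scale, delete) would then operate on `current()`.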

Translate (move)

  • X axis: g (+X / right) • G (−X / left)
  • Y axis: h (+Y / up) • H (−Y / down)
  • Z axis: j (+Z / forward) • J (−Z / back)

Hold the key to move smoothly. Motion speed can be tuned in code via MOVE_STEP_M.
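The held-key translation above can be sketched as a per-frame update scaled by elapsed time, so motion speed is frame-rate independent. The axis mapping and the use of MOVE_STEP_M as metres per second are assumptions for illustration; only the key bindings come from the text.

```python
# Illustrative sketch of held-key translation using the keys listed above.
MOVE_STEP_M = 0.5  # assumed units: metres per second while the key is held

AXIS_KEYS = {
    'g': (0, +1), 'G': (0, -1),   # X: right / left
    'h': (1, +1), 'H': (1, -1),   # Y: up / down
    'j': (2, +1), 'J': (2, -1),   # Z: forward / back
}

def apply_move(position, key, dt):
    """Return a new [x, y, z] after holding `key` for `dt` seconds."""
    axis, sign = AXIS_KEYS[key]
    pos = list(position)
    pos[axis] += sign * MOVE_STEP_M * dt
    return pos
```

For example, holding g for 2 seconds moves the model 1 metre along +X at the default speed.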

Rotate (Euler)

  • Yaw (around Y): t / T
  • Pitch (around X): y / Y
  • Roll (around Z): u / U

Hold to rotate smoothly. Rotation speed can be tuned via ROT_STEP_DEG.

Scale (uniform)

  • Scale Down / Up: k / m

Hold to scale smoothly; scaling acts on the active model.

Delete / Remove

  • Delete model + its newest JSON: x
  • Delete model + all its JSONs: X (Shift+x)

Removes the active model from the scene and cleans up its saved-state JSON file(s) if present.
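The JSON cleanup step could look like the sketch below. The demo's actual snapshot naming scheme is not documented here, so this assumes files named after the model's stem inside saved_models/; the function name is hypothetical.

```python
from pathlib import Path

def delete_snapshots(model_path, saved_dir='saved_models', newest_only=False):
    """Remove saved-state JSON file(s) for a model; return the deleted paths.

    Assumes snapshots are named <model-stem>*.json, which is an
    illustrative convention, not the demo's documented one.
    """
    stem = Path(model_path).stem
    matches = sorted(Path(saved_dir).glob(f'{stem}*.json'),
                     key=lambda p: p.stat().st_mtime)
    targets = matches[-1:] if newest_only else matches   # x vs. Shift+x
    for p in targets:
        p.unlink()
    return [str(p) for p in targets]
```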

Drag & Drop Models

  • Drop .osgb, .glb, or .gltf files directly onto the Vizard window.
  • The model is added with vizfx.addChild and becomes the active selection.
  • Spawns in front of the user with Y aligned to current eye height.
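The spawn placement described above (in front of the user, Y matched to eye height) can be sketched as a little geometry. The 1.5 m forward distance and the yaw-only heading are assumptions for illustration.

```python
import math

def spawn_position(eye_pos, yaw_deg, distance=1.5):
    """Return [x, y, z] `distance` metres ahead of the viewer, at eye height.

    Assumes Vizard-style coordinates: +Y up, +Z forward, yaw about Y.
    """
    yaw = math.radians(yaw_deg)
    return [eye_pos[0] + distance * math.sin(yaw),
            eye_pos[1],                              # keep current eye height
            eye_pos[2] + distance * math.cos(yaw)]
```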

💾 Storing & Loading Models

Store (Save) Model State

  • When a model is generated or dragged in, its source path is recorded.
  • Use your script's save action (exposed in the spawner) to write a JSON snapshot into saved_models/ (includes model path, transform, optional metadata).
  • Re-saving overwrites the most recent JSON for that same model path, or you can keep multiple timestamped variants (configurable).
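A snapshot writer along these lines would cover the behavior described above. The demo's real JSON schema is not shown here, so the field names and the timestamping convention below are assumptions.

```python
import json
import time
from pathlib import Path

def save_snapshot(model_path, position, euler, scale,
                  saved_dir='saved_models', timestamped=False):
    """Write a JSON snapshot of one model's source path and transform.

    With timestamped=False, re-saving the same model path overwrites the
    previous snapshot; with timestamped=True, variants accumulate.
    """
    out = Path(saved_dir)
    out.mkdir(exist_ok=True)
    stem = Path(model_path).stem
    name = f'{stem}_{int(time.time())}.json' if timestamped else f'{stem}.json'
    snapshot = {'model_path': str(model_path),
                'position': position,
                'euler': euler,
                'scale': scale}
    target = out / name
    target.write_text(json.dumps(snapshot, indent=2))
    return target
```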

Load Saved Models

  • Use the accompanying Model Loader (Model_Loader.py) to:
      • Browse and load JSON snapshots from saved_models/.
      • Cycle selection ([, ], {, }) and manipulate just like in the spawner.
      • Delete the current selection (x / X), which also removes its associated JSON(s).

Tip: You can maintain a curated library of models + placements by saving JSONs per object/condition and reloading them as needed.
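The loading side of that workflow can be sketched as below. In the real demo each snapshot's model would be re-added to the scene (e.g. via vizfx.addChild) and re-posed; here that step is left to the caller, and the function name is hypothetical.

```python
import json
from pathlib import Path

def load_snapshots(saved_dir='saved_models'):
    """Return the parsed contents of every JSON snapshot in saved_dir.

    Each snapshot is assumed to carry at least 'model_path' plus a
    transform, as written by the save step.
    """
    return [json.loads(p.read_text())
            for p in sorted(Path(saved_dir).glob('*.json'))]
```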


โš™๏ธ Setup Instructions

1. 📦 Install Required Python Libraries

pip install mpx-genai-sdk pillow requests

SightLab and Vizard dependencies must already be installed and configured.


2. 🔑 Get an API Key

🔹 Meshy

  • Visit: https://www.meshy.ai/
  • Log in and go to: https://www.meshy.ai/api
  • Copy your API key
  • Requires a paid subscription (minimum $20/month)

🔹 MasterpieceX


3. ๐Ÿ“ Add API keys to key files

In Windows Search, type "cmd" to open a Command Prompt, then enter the following:

🔹 For Meshy:

setx MESHY_API_KEY "your-meshy-api-key"

🔹 For MasterpieceX:

setx MPX_SDK_BEARER_TOKEN "your-mpx-bearer-token"

โš ๏ธ Important: Restart Vizard (or your terminal) after setting these.


4. How to Run

  1. Run AI_3D_Model_Spawner.py
  2. Choose your model generation library (MasterpieceX or Meshy currently) and hardware
  3. Press N to enter a text prompt (e.g., a red futuristic drone)
  4. Press and hold c or the right-hand grip button to speak a command; release to send it
  5. A model is generated via the preview API call (geometry only). A placeholder sphere appears until the model loads.
  6. Once loaded, it is automatically refined with textures and PBR materials
  7. The refined model replaces the preview in-scene for Meshy (MasterpieceX has no preview model)
  8. Press B to toggle lighting on/off to see which looks better
  9. Press N, the grip button, or c again to generate another model
  10. Grab the new model using the trigger buttons or the left mouse button
  11. Press trigger or spacebar to end the trial

Estimated time to model completion:

  • MasterpieceX: 1-2 minutes
  • Meshy: 2-4 minutes
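Since generation takes minutes, the script has to poll the service until the model is ready. The real Meshy and MasterpieceX endpoints and status fields differ, so this generic sketch takes an injected check_status callable instead of making real API calls; all names are assumptions.

```python
import time

def wait_for_model(check_status, poll_interval=5.0, timeout=300.0,
                   sleep=time.sleep):
    """Poll `check_status()` until it returns a download URL or time runs out.

    `check_status` should return a URL string when the model is ready,
    or None while generation is still pending.
    """
    waited = 0.0
    while waited < timeout:
        url = check_status()
        if url:
            return url
        sleep(poll_interval)          # injectable for testing
        waited += poll_interval
    raise TimeoutError('model generation did not finish in time')
```

In the demo, the placeholder sphere would stay visible during this loop and be swapped for the finished model once the URL resolves.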

Global Config Options

# ===== Passthrough / AR Settings =====
USE_PASSTHROUGH = False

# ===== Environment Settings =====
ENVIRONMENT_MODEL = 'sightlab_resources/environments/RockyCavern.osgb'

# ===== GUI Options =====
USE_GUI = False

# ===== Speech Recognition =====
USE_SPEECH_RECOGNITION = True

# ===== AI Model Generation =====
USE_AI = True

# ===== Model Options =====
MODEL_STARTING_POINT = [0, 1.5, 2]
MODEL_STARTING_POINT_PASSTHROUGH = [0, 1.5, 0.8]

# ===== Data Saving =====
SAVE_DATA = False
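The two starting-point settings above suggest selection logic like the following; the helper name is illustrative, the values are the defaults from the config block.

```python
# Defaults copied from the config options above.
USE_PASSTHROUGH = False
MODEL_STARTING_POINT = [0, 1.5, 2]
MODEL_STARTING_POINT_PASSTHROUGH = [0, 1.5, 0.8]

def starting_point(passthrough=USE_PASSTHROUGH):
    """Spawn models closer to the user when passthrough/AR is enabled."""
    return (MODEL_STARTING_POINT_PASSTHROUGH if passthrough
            else MODEL_STARTING_POINT)
```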

🧠 Example Use Cases

  • 🧠 Psychology: generate phobic stimuli (e.g., a spider, a syringe)
  • 🦴 Education: spawn anatomy models (e.g., a human skull, a brain cross-section)
  • ➕ Math/Cognition: test symbolic vs. object views (e.g., 3 apples, a number line)
  • 🧪 UX/VR Dev: prototype object-based interactions in VR
  • 🎮 Game Studies: on-demand in-scene assets for testing

💡 Tips

  • All models are saved into the /Resources/ folder automatically
  • Meshy preview models can be deleted once the refined version has loaded
  • Object names are auto-generated from the prompt text for traceability
  • You can grab models with controllers or interact via gaze, as well as translate, rotate, scale, and load them
  • Additionally, supports dragging and dropping extra models in .glb, .gltf, or .osgb format
  • SightLab automatically logs gaze and grab data per object
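The prompt-to-object-name generation mentioned in the tips could work like this sketch; the demo's exact naming scheme is not documented here, so this is one plausible approach.

```python
import re

def name_from_prompt(prompt, max_len=40):
    """Derive a filesystem-friendly object name from the prompt text.

    e.g. 'a red futuristic drone' becomes 'a_red_futuristic_drone'.
    """
    slug = re.sub(r'[^a-z0-9]+', '_', prompt.lower()).strip('_')
    return slug[:max_len]
```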
