Public Speaking

Description

Note: Available upon request. Message support@worldviz.com or sales@worldviz.com

The subject is seated in a chair and gives a speech following instructions
that appear on a monitor. Several factors (such as the audience's attitude)
are expected to affect the subject's anxiety. GSR and ECG data can be
acquired via BIOPAC.

The following parameters can be manipulated by the experimenter via a
separate screen:

  1. Are the other people looking at you?
  2. Are the other people displaying frowns or smiles?
  3. Are the other people showing boredom?

Running the experiment

Run PublicSpeaking_SightLab.py to start the application.

Press the spacebar to fade out the gray quad and start the session.

The participant, wearing an HMD, is asked to deliver a short speech while
the experimenter changes parameters.

Key Mapping:

  • Scroll the text the subject is reading: o or l keys (lowercase O or L), or the subject can use the left-hand controller's X and Y buttons
  • Raise or lower the curtain: z, or click the button with the mouse
  • Increase or decrease the number of avatars: + or - (or click the button)
  • Change avatar attitudes: keys a-h (or click the buttons)
Key      Action              Network Event
z        Next Stage          NEXT_STAGE_EVENT
↑/↓      Text Scroll         TEXT_SCROLL_UP/DOWN_EVENT
o/l      Text Scroll         TEXT_SCROLL_UP/DOWN_EVENT
a-h      Audience Attitude   AUDIENCE_ATTITUDE_EVENT
+/=      More Avatars        AUDIENCE_INCREMENT_EVENT
-        Fewer Avatars       AUDIENCE_DECREMENT_EVENT
Space    Trial Start/End     TRIAL_START/END_KEY_EVENT
F1       Info Toggle         F1_INFO_EVENT
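The key-to-event mapping above can be sketched as a simple dispatch table. The event names come from the table; the lookup helper itself is an illustrative assumption, not SightLab's actual input-handling API.

```python
# Map experimenter key presses to the network event names from the table.
# (Arrow keys and F1 are omitted here for brevity; the pattern is the same.)
KEY_EVENTS = {
    "z": "NEXT_STAGE_EVENT",
    "o": "TEXT_SCROLL_UP_EVENT",
    "l": "TEXT_SCROLL_DOWN_EVENT",
    "+": "AUDIENCE_INCREMENT_EVENT",
    "-": "AUDIENCE_DECREMENT_EVENT",
    " ": "TRIAL_START_END_KEY_EVENT",
}

# Each of the attitude keys a-h maps to the same audience-attitude event.
for key in "abcdefgh":
    KEY_EVENTS[key] = "AUDIENCE_ATTITUDE_EVENT"

def event_for_key(key):
    """Return the network event name broadcast for a key press, or None."""
    return KEY_EVENTS.get(key)
```

In the multi-user version, the server would broadcast the returned event name to all connected clients.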

Session Replay and Data Files

Run SightLabVR_Replay.py to view scan paths, heatmaps, and other visualizations of eye-tracking and user data.

View the raw data files in the data folder.

Modifying

Change the scrolling speech text by editing config/speechOnSpeech.txt

Change other parameters in config.py
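A config.py for this experiment might look like the fragment below. The parameter names here are hypothetical placeholders; check the shipped config.py for the actual names.

```python
# Hypothetical config.py values -- actual names in the shipped file may differ.
NUM_AVATARS = 5                             # initial audience size (assumed name)
SPEECH_TEXT = "config/speechOnSpeech.txt"   # scrolling speech text (path from the docs)
SCROLL_STEP = 1                             # lines scrolled per key press (assumed name)
```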

Adjust built-in SightLab parameters, such as recording a video of the session to sync with AcqKnowledge, in the SightLab constructor:

sightlab = sl.SightLab(gui=False, pid=False, screenrecord=False, biopac=True)

Other SightLab features (such as Multi-User, Face Tracking, 360 media, Virtual Screens, and saving transcriptions and audio) can be added with little effort.

Multi User Version

Running the Server:

  1. Run PublicSpeaking_SightLab_Server.py
  2. All input is handled on the server machine
  3. Server automatically broadcasts events to connected clients

Running Clients:

  1. Run PublicSpeaking_SightLab_Client.py on each client machine
  2. Clients automatically connect to server
  3. Clients respond to server events automatically
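The server-to-client event flow described above can be illustrated with plain TCP sockets. SightLab's own networking layer is not shown here; this is only a minimal sketch of the one-way broadcast pattern, with hypothetical names throughout.

```python
import socket

# Server side: listen on a free local port (port 0 lets the OS choose one).
srv = socket.socket()
srv.bind(("127.0.0.1", 0))
srv.listen()
port = srv.getsockname()[1]

# Client side: connect to the server, as the client script would on launch.
cli = socket.create_connection(("127.0.0.1", port))

# Server accepts the connection and keeps a list of connected clients.
conn, _ = srv.accept()
clients = [conn]

def broadcast(event):
    """Send a newline-terminated event name to every connected client."""
    for c in clients:
        c.sendall((event + "\n").encode())

# The server broadcasts an event; the client receives and acts on it.
broadcast("NEXT_STAGE_EVENT")
received = cli.makefile().readline().strip()
```

In the real scripts, the server would call its broadcast routine from its key handlers, and each client would loop on received events and update its local scene accordingly.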