
Avatars

An avatar represents the user in the virtual environment. The avatar can be visualized from either a 1st or 3rd person point of view and is animated using tracking data and inverse kinematics (IK). Optionally, the avatar can be hidden, in which case only the hand(s) are rendered for tasks such as grabbing.

Avatar Animation via Inverse Kinematics

Avatar IK refers to animating an avatar using tracking data, for either 1st or 3rd person views of the avatar. Vizconnect provides a visual tool to map trackers to avatar body parts.

Note: Vizard’s avatar IK is not intended to serve as an exhaustive inverse kinematics engine, but rather as an effective intermediate solution.

Supported Characters

Currently, Vizard’s avatar IK library requires that the character have a skeleton matching either the standard or HD Complete Character.

Avatar IK Setup

  1. Add the tracker(s) used to animate the avatar from the Trackers tab.
  2. Add an input device from the Inputs tab to use with the calibration function.
  3. (optional) Add the input device(s) used for hand gestures from the Inputs tab.
  4. Add an avatar from the Avatars tab.
  5. Open the Animator dialog box and map trackers to avatar body parts.
  6. Press the Mappings button to map an input signal to the calibration function.
  7. (optional) Press the Gestures button to map input signals to gestures.

See the vizconnect tutorials for more on using avatars in the configuration GUI.
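
For reference, the saved configuration is loaded from a script before any of the avatar commands covered below are used. A minimal sketch, assuming the configuration was saved as vizconnect_config.py (the filename is hypothetical):

import viz
import vizconnect

# Load and start the configuration created in the vizconnect GUI.
vizconnect.go('vizconnect_config.py')

# Add environment content as usual; piazza.osgb is a model that ships with Vizard.
viz.addChild('piazza.osgb')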

T-Pose Calibration

Calibration is necessary for the avatar to accurately follow the user. A calibration does two things:

  1. The avatar is uniformly scaled such that the avatar's eyeheight matches the user's eyeheight.
  2. Each tracker (except the head tracker, which is taken as-is) is checked against the avatar's skeleton (in an ideal T-pose), and a correction offset is applied to each tracker to counter the discrepancies.

Note: It is critical that the user's head tracker be properly aligned to the point between the user's eyes. To account for any offset, add a preTrans value in the tracker's offset dialog. The calibration process takes the head tracker as ground-truth, making no corrections to it, as the whole calibration is based on it.

To calibrate, follow the steps below:

  1. With the application running, the user faces the tracking system's north direction (0 degrees yaw) and assumes the T-pose.
  2. The input signal mapped to the calibration function is triggered.
  3. The user resumes normal activity.
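
Calibration can also be triggered from a script by calling the wrapped avatar's calibrate() method (see the Avatar API below). A minimal sketch, assuming the configuration has already been loaded and a keyboard key is used as the trigger:

import vizconnect
import vizact

# Get the default wrapped avatar from the loaded configuration.
avatar = vizconnect.getAvatar()

# Call the animator's calibration function when the 'c' key is pressed.
# The key is an arbitrary choice; any input signal can be used instead.
vizact.onkeydown('c', avatar.calibrate)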

Gestures

Hand gestures can be mapped to input signals through the Gestures dialog box. Hand gestures are often used in conjunction with tool actions and transport movements. For example, a fist gesture helps to show that a grabber tool is activated. To set this up, the same input signal is mapped to the gesture through the Gestures dialog and to the grabber through the Tools tab.

Avatar Perspectives

To create a 1st person view of the avatar, place the display under the avatar's head attachment point in the Scene Graph. This allows the user to see a representation of themselves and know where their hands and feet are in the virtual environment. For a 3rd person avatar view, place the display under a tracker, transport, or fixed group node that is not attached to the avatar.

Attachment Points

Attachment points are linkable locations on the head and hands of an avatar. Typically, displays are linked to the head while tools are linked to either the hands or the head. The following image shows an example configuration using attachment points:

[Image: example configuration using attachment points]

It's also possible to link objects to attachment points in the script that imports the configuration file. The following code links a phone to an avatar hand:

# 'phone' is assumed to be a previously loaded model node
avatar = vizconnect.getAvatar()
rhand = avatar.getAttachmentPoint('r_hand').getNode3d()
viz.link(rhand, phone)

Avatar API

The following command returns a handle to the wrapped avatar using the name defined in the configuration:

vizconnect.getAvatar(name)
    Returns a handle to the wrapped avatar.
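
For example, assuming an avatar was added in the GUI and named 'main_avatar' (the name here is hypothetical):

# Get the wrapped avatar by the name defined in the configuration.
avatar = vizconnect.getAvatar('main_avatar')

# When called without a name, the default avatar is returned.
avatar = vizconnect.getAvatar()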

Wrapped avatar objects have the following methods in addition to the base wrapped object methods:

<WrappedAvatar>.calibrate()
    Calls the calibration function for the avatar's animator.

<WrappedAvatar>.getAttachmentPoint(name)
    Returns an attachment point.
    name: can be head, l_hand, or r_hand

<WrappedAvatar>.getAttachmentPointNames()
    Returns a list of strings identifying the attachment points associated with the avatar.

<WrappedAvatar>.getHandModels(activeOnly=True)
    Gets a list of the hand objects instantiated for this avatar. Note that this returns only hand models that have gestures or can be articulated.
    activeOnly: if True, returns only the hand objects with associated trackers

<WrappedAvatar>.getHands(activeOnly=True)
    Gets a list of the hand objects instantiated for this avatar. This returns any bone/object registered to either the right or left hand.
    activeOnly: if True, returns only the hand objects with associated trackers

<WrappedAvatar>.getHorizontalMovementScale()
    Returns the horizontal movement scale.

<WrappedAvatar>.getMovementScale()
    Returns the scale of movement applied to the avatar.

<WrappedAvatar>.getPaused()
    Returns the paused state of the avatar.

<WrappedAvatar>.getVerticalMovementScale()
    Returns the vertical movement scale.

<WrappedAvatar>.setHorizontalMovementScale(scale)
    Adjusts the scale of horizontal movement of the avatar.
    scale: float value

<WrappedAvatar>.setMovementScale(scale)
    Adjusts the scale of horizontal and vertical movement of the avatar.
    scale: list with float values for horizontal and vertical scale

<WrappedAvatar>.setPaused(state)
    Sets the paused state of the avatar. Pausing also pauses all virtual trackers used by the avatar.
    state: can be viz.ON, viz.OFF, or viz.TOGGLE

<WrappedAvatar>.setVerticalMovementScale(scale)
    Adjusts the scale of vertical movement of the avatar.
    scale: float value

<WrappedAvatar>.setVisible(state=viz.TOGGLE)
    Sets the visible state of the avatar.
    state: can be viz.ON, viz.OFF, or viz.TOGGLE
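
As an illustration, the following sketch calls a few of these methods. It assumes a configuration containing an avatar has already been loaded with vizconnect.go():

import viz
import vizconnect

avatar = vizconnect.getAvatar()

# List the attachment points available on this avatar (e.g. head, l_hand, r_hand).
print(avatar.getAttachmentPointNames())

# Halve the horizontal movement scale applied to the avatar.
avatar.setHorizontalMovementScale(0.5)

# Toggle the avatar's visibility.
avatar.setVisible(viz.TOGGLE)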

Getting Hand Models

The following code shows how to get a handle to the hand models of the 'head and hands' avatar:

# Works with 'head and hands' avatar, not 'mark'
avatar = vizconnect.getAvatar()
hands = avatar.getHands()
leftHand = hands[vizconnect.AVATAR_L_HAND]
rightHand = hands[vizconnect.AVATAR_R_HAND]
rightHand.alpha(0.5)

The hands of the mark avatar are meshes of a larger model. It is not possible to get a handle to the mesh, but some commands can be called on the avatar with the mesh name passed as a parameter:

avatar = vizconnect.getAvatar()
mark = avatar.getRaw()
# Set alpha of hand mesh
mark.alpha(0.5,'mark_hand_r.cmf')
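
If the exact mesh name is not known, the sub-node names of the raw avatar model can be listed. A minimal sketch (node names vary by model):

avatar = vizconnect.getAvatar()
mark = avatar.getRaw()
# Print the names of all sub-nodes (meshes) in the underlying model.
print(mark.getNodeNames())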