An avatar represents the user in the virtual environment. The avatar can be visualized from either a 1st or 3rd person point of view and is animated using tracking data and inverse kinematics (IK). Optionally, the avatar can be hidden, in which case only the hand(s) are rendered for tasks such as grabbing.
Avatar IK refers to animating an avatar using tracking data, for either 1st or 3rd person views of the avatar. Vizconnect provides a visual tool to map trackers to avatar body parts.
Note: Vizard’s avatar IK is not intended to serve as an exhaustive inverse kinematics engine, but rather as an effective intermediate solution.
Currently, Vizard’s avatar IK library requires that the character have a skeleton matching either the standard or HD Complete Character.
See the vizconnect tutorials for more on using avatars in the configuration GUI.
Calibration is necessary for the avatar to animate correctly. A calibration does two things:
Note: It is critical that the user's head tracker be properly aligned to the point between the user's eyes. To account for any offset, add a preTrans value in the tracker's offset dialog. The calibration process takes the head tracker as ground truth and makes no corrections to it, since the whole calibration is based on it.
To calibrate, follow the steps below:
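Calibration can also be triggered from a script via the wrapped avatar's calibrate() method (listed in the methods table below). A minimal sketch, assuming a default avatar, a placeholder configuration filename, and an arbitrary trigger key:

```python
import vizact
import vizconnect

vizconnect.go('vizconnect_config.py')  # placeholder config filename

# Trigger calibration when the user presses 'c' (key choice is arbitrary).
vizact.onkeydown('c', vizconnect.getAvatar().calibrate)
```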
Hand gestures can be mapped to input signals through the Gestures dialog box. Hand gestures are often used in conjunction with tool actions and transport movements. For example, a fist gesture helps show that a grabber tool is activated. To set this up, the same input signal is mapped to the gesture through the Gestures dialog and to the grabber through the Tools tab.
To create a 1st person view of the avatar, place the display under the avatar's head attachment point in the Scene Graph. This allows the user to see a representation of themselves and know where their hands and feet are in the virtual environment. For a 3rd person avatar view, place the display under a tracker, transport, or fixed group node that is not attached to the avatar.
Attachment points are linkable locations on the head and hands of an avatar. Typically displays are linked to the head while tools are linked to either hands or head. The following image shows an example configuration using attachment points:
It's also possible to link objects to attachment points in the script that imports the configuration file. The following sketch links a phone model to an avatar hand (the configuration filename, model file, and offset values are placeholders; substitute your own):
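```python
import viz
import vizconnect

# Load the vizconnect configuration (filename is a placeholder).
vizconnect.go('vizconnect_config.py')

# Add the phone model (placeholder filename).
phone = viz.addChild('phone.osgb')

# Get the right hand attachment point of the default avatar.
rHand = vizconnect.getAvatar().getAttachmentPoint('r_hand')

# Link the phone to the attachment point's node and offset it into the palm.
link = viz.link(rHand.getNode3d(), phone)
link.preTrans([0, 0.05, 0])  # example offset in meters
```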
The following command returns a handle to the wrapped avatar using the name defined in the configuration:
| Command | Description |
| --- | --- |
| vizconnect.getAvatar(name) | Returns a handle to the wrapped avatar. |
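A minimal usage sketch (the avatar name 'main_avatar' is a placeholder for whatever name appears in your configuration):

```python
import vizconnect

vizconnect.go('vizconnect_config.py')  # placeholder config filename

# Look up the wrapped avatar by its configuration name.
avatar = vizconnect.getAvatar('main_avatar')
```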
Wrapped avatar objects have the following methods in addition to the base wrapped object methods:
| Method | Description |
| --- | --- |
| <WrappedAvatar>.calibrate() | Calls the calibration function for the avatar's animator. |
| <WrappedAvatar>.getAttachmentPoint(name) | Returns an attachment point. name: can be head, l_hand, or r_hand. |
| <WrappedAvatar>.getAttachmentPointNames() | Returns a list of strings identifying the attachment points associated with the avatar. |
| <WrappedAvatar>.getHandModels(activeOnly=True) | Returns a list of the hand objects instantiated for this avatar. Only hand models that have gestures or can be articulated are returned. activeOnly: if True, returns only the hand objects with associated trackers. |
| <WrappedAvatar>.getHands(activeOnly=True) | Returns a list of the hand objects instantiated for this avatar. Any bone/object registered to either the right or left hand is returned. activeOnly: if True, returns only the hand objects with associated trackers. |
| <WrappedAvatar>.getHorizontalMovementScale() | Returns the horizontal movement scale. |
| <WrappedAvatar>.getMovementScale() | Returns the scale of movement applied to the avatar. |
| <WrappedAvatar>.getPaused() | Returns the paused state of the avatar. |
| <WrappedAvatar>.getVerticalMovementScale() | Returns the vertical movement scale. |
| <WrappedAvatar>.setHorizontalMovementScale(scale) | Sets the scale of horizontal movement of the avatar. scale: float value. |
| <WrappedAvatar>.setMovementScale(scale) | Sets the scale of horizontal and vertical movement of the avatar. scale: list of float values for horizontal and vertical scale. |
| <WrappedAvatar>.setPaused(state) | Sets the paused state of the avatar; pauses all virtual trackers used by the avatar. state: can be viz.ON, viz.OFF, or viz.TOGGLE. |
| <WrappedAvatar>.setVerticalMovementScale(scale) | Sets the scale of vertical movement of the avatar. scale: float value. |
| <WrappedAvatar>.setVisible(state=viz.TOGGLE) | Sets the visible state of the avatar. state: can be viz.ON, viz.OFF, or viz.TOGGLE. |
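For example, the movement scale and visibility methods can be combined in a script. A short sketch, assuming a default avatar and a placeholder configuration filename:

```python
import viz
import vizact
import vizconnect

vizconnect.go('vizconnect_config.py')  # placeholder config filename

avatar = vizconnect.getAvatar()

# Double horizontal movement while leaving vertical movement unscaled.
avatar.setMovementScale([2.0, 1.0])
print(avatar.getHorizontalMovementScale())  # prints 2.0

# Toggle avatar visibility with the 'v' key.
vizact.onkeydown('v', avatar.setVisible, viz.TOGGLE)
```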
The following code shows how to get a handle to the hand models of the head and hands avatar (the configuration filename and avatar name are placeholders):
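```python
import vizconnect

vizconnect.go('vizconnect_config.py')  # placeholder config filename

# 'head_and_hands' is a placeholder for the avatar name in the configuration.
avatar = vizconnect.getAvatar('head_and_hands')

# Hand models that have gestures or can be articulated; pass
# activeOnly=False to also include hands without associated trackers.
for hand in avatar.getHandModels():
    print(hand)
```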
The hands of the Mark avatar are meshes of a larger model. It is not possible to get a handle to the mesh directly, but some commands can be called on the avatar with the mesh name passed as a parameter. A minimal sketch (the mesh names below are hypothetical; inspect the model for the actual names):
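```python
import viz
import vizconnect

vizconnect.go('vizconnect_config.py')  # placeholder config filename

# Get the raw Vizard avatar behind the vizconnect wrapper.
mark = vizconnect.getAvatar('mark').getRaw()

# node3d commands accept a sub-node/mesh name; the names below are
# hypothetical, so check the model for the actual mesh names.
mark.setAlpha(0.5, node='r_hand_mesh')     # fade the right hand mesh
mark.visible(viz.OFF, node='l_hand_mesh')  # hide the left hand mesh
```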