Invoke offers a new way of working with spatial audio. As a spatial audio production tool, it explores different ways to embody the audio workflow. The main feature of the app is a voice-based drawing tool used to make trajectories for spatial audio objects.
Voice Drawing
The interaction dynamic for drawing trajectories is a new way to work with spatial audio. Combining input from the voice and the hand provides a continuous space-time method of composition, and Voice Drawing allows detailed production of trajectories to spatially and temporally mix tracks. As a user, you position a “pen” in virtual space, pull the controller trigger, make a sound with your voice, and build shapes in space. After a Voice Sketch is created, the line data is transformed into a control-point-based Bézier curve, a trajectory, that retains the volume information of the voice input. Placing an audio object on a trajectory then automates that object’s volume based on the recorded volume of the voice.
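As a rough sketch of that pipeline (the class and function names below are illustrative, not Invoke's actual code), a Voice Sketch can be treated as a series of timed samples, each carrying a pen position and a voice loudness; the samples are reduced to control points while the loudness is kept as a volume envelope for automation:

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class SketchSample:
    """One frame of a Voice Sketch: where the pen was and how loud the voice was."""
    time: float                            # seconds since the sketch started
    position: Tuple[float, float, float]   # pen position in world space
    loudness: float                        # normalised voice level, 0.0 to 1.0

@dataclass
class Trajectory:
    """A control-point-based curve that retains the recorded voice volume."""
    control_points: List[Tuple[float, float, float]]
    volume_envelope: List[Tuple[float, float]]   # (time, loudness) pairs

def sketch_to_trajectory(samples: List[SketchSample], step: int = 8) -> Trajectory:
    """Reduce a dense voice sketch to a sparser set of control points.

    A real implementation would fit a Bezier curve to the samples; here we
    simply keep every `step`-th point and carry the loudness along as a
    volume envelope.
    """
    kept = samples[::step] or samples
    return Trajectory(
        control_points=[s.position for s in kept],
        volume_envelope=[(s.time, s.loudness) for s in samples],
    )

def automated_gain(trajectory: Trajectory, playhead: float) -> float:
    """Look up the recorded voice loudness at a given playback time.

    An audio object placed on the trajectory would use this value to
    automate its volume.
    """
    envelope = trajectory.volume_envelope
    if not envelope:
        return 1.0
    if playhead <= envelope[0][0]:
        return envelope[0][1]
    for (t0, v0), (t1, v1) in zip(envelope, envelope[1:]):
        if t0 <= playhead <= t1:
            a = (playhead - t0) / (t1 - t0) if t1 > t0 else 0.0
            return v0 + a * (v1 - v0)
    return envelope[-1][1]
```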
When using Voice Drawing collaboratively, the volume information from both collaborators is used to draw each line. This means two lines could be drawn in two different places with the same volume information, and that when you or your partner draws a line, the resulting trajectory is never entirely controlled by one of you.
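The blending rule is not specified here; as one hedged possibility (the names and weights are assumptions), the two voice levels could simply be averaged into a single drawing volume:

```python
def combined_loudness(drawer_level: float, partner_level: float) -> float:
    """Blend two normalised voice levels into one drawing volume.

    An equal-weight average is just one possible rule; whatever the blend,
    the trajectory is never fully controlled by the person holding the pen.
    """
    return 0.5 * (drawer_level + partner_level)
```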



Object Interaction
Traditionally, object selection and manipulation in VR are considered either direct or indirect. Direct interaction uses a natural metaphor: you grab a thing close enough to touch, as you would a cup or a ball. Indirect interaction uses a form of mediation to allow action at a distance, like picking up a car with a crane. Each of these methods gives a different sensation of embodiment, and each changes how spaces for action are designed.
Invoke uses both direct and indirect action; this allows precise control as well as extended interaction spaces. For the user, laser-based object interaction sits on top of direct spatial selection and manipulation: you can either walk up to an object and grab it, or aim at and grab it from a distance. When holding objects, you can pull them closer or push them farther away using controls on the hardware VR controller.
The system is built on top of VRGrabber, the VR control system by Hecomi.
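As a generic illustration of how direct and laser-based grabbing can coexist (this is not the VRGrabber API; the function names, thresholds, and the simple ray test below are assumptions), object picking can fall back from a within-reach grab to a ray cast from the hand, with a push/pull adjustment for held objects:

```python
import numpy as np

GRAB_REACH = 0.6   # metres within which a direct grab is assumed (illustrative value)

def pick_object(hand_pos, hand_dir, objects, radii):
    """Choose an object to grab: directly if one is within reach, otherwise by ray.

    `objects` is an (N, 3) array of object centres and `radii` their pick radii.
    """
    objects = np.asarray(objects, dtype=float)
    if len(objects) == 0:
        return None
    hand_pos = np.asarray(hand_pos, dtype=float)
    hand_dir = np.asarray(hand_dir, dtype=float)
    hand_dir = hand_dir / np.linalg.norm(hand_dir)

    # Direct interaction: grab the nearest object the hand can touch.
    dists = np.linalg.norm(objects - hand_pos, axis=1)
    if dists.min() <= GRAB_REACH:
        return int(dists.argmin())

    # Indirect interaction: cast a ray from the hand and take the nearest object
    # whose centre lies close enough to the ray.
    to_obj = objects - hand_pos
    along = to_obj @ hand_dir                                  # distance along the ray
    perp = np.linalg.norm(to_obj - np.outer(along, hand_dir), axis=1)
    hit = (along > 0) & (perp <= np.asarray(radii, dtype=float))
    if hit.any():
        return int(np.where(hit, along, np.inf).argmin())
    return None

def push_pull(hold_distance, thumbstick_y, speed=2.0, dt=1 / 72):
    """Move a held object along the laser: push it away or pull it closer."""
    return max(0.1, hold_distance + thumbstick_y * speed * dt)
```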
Menus
While it would be possible to put all functionality “in-the-world”, a set of menus is used instead to manage the various options and abstractions. There are three main menu types:
- Mixer – a timeline and audio mixer to control gain, solo, and mute functions, plus spatial parameters like Doppler, Reverb Send, and Volumetric Radius (a sketch of this per-track state follows the list).
- Trajectory Manager – a means to overview, toggle the visibility of, and delete trajectories.
- Hand Menu – a way to manage world-space menus and other global settings.
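The per-track state the Mixer exposes might be summarised as follows; the field names and default values are illustrative, not Invoke's data model:

```python
from dataclasses import dataclass

@dataclass
class MixerTrack:
    """Per-track state exposed by the Mixer menu (illustrative fields)."""
    name: str
    gain_db: float = 0.0             # track gain
    solo: bool = False
    mute: bool = False
    doppler: float = 1.0             # Doppler amount
    reverb_send: float = 0.0         # send level to the reverb bus
    volumetric_radius: float = 0.5   # size of the volumetric source, in metres
```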

This system is built on top of Hover UI Kit, some great work by Aesthetic Interactive.
Embodiment

Because Invoke is a shared VR experience, mapping your body into the virtual space is an important feature. An Inverse Kinematics system drives a VR puppet that follows your movements, using the HMD, the controllers, and a tracking puck attached to your waist. Sometimes the mapping can go a bit funny though…
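For a sense of what such an IK mapping involves (this is a generic two-bone solve, not Invoke's actual solver), the tracked HMD, controllers, and waist puck typically become goals for the puppet's head, hands, and hips, and each limb is then solved towards its goal, for example:

```python
import numpy as np

def two_bone_elbow(shoulder, wrist_target, upper_len, lower_len, pole):
    """Place the elbow of a shoulder->elbow->wrist chain so the wrist reaches its goal.

    Classic two-bone IK via the law of cosines; `pole` biases the bend direction
    and must not be collinear with the arm. This sketches the kind of per-limb
    solve an IK avatar performs, not Invoke's solver.
    """
    shoulder, wrist_target, pole = (np.asarray(v, dtype=float)
                                    for v in (shoulder, wrist_target, pole))
    to_target = wrist_target - shoulder
    d = to_target / max(np.linalg.norm(to_target), 1e-6)
    # Clamp the reach so the chain stays solvable.
    dist = np.clip(np.linalg.norm(to_target), 1e-6, upper_len + lower_len - 1e-6)

    # Angle at the shoulder between the upper arm and the shoulder->target line.
    cos_a = (upper_len**2 + dist**2 - lower_len**2) / (2 * upper_len * dist)
    angle = np.arccos(np.clip(cos_a, -1.0, 1.0))

    # Bend axis chosen so the elbow rotates towards the pole hint.
    axis = np.cross(d, pole - shoulder)
    axis /= np.linalg.norm(axis)

    # Rodrigues rotation of the target direction by `angle` about `axis`
    # (the k(k.v)(1-cos) term vanishes because the axis is perpendicular to d).
    bent = d * np.cos(angle) + np.cross(axis, d) * np.sin(angle)
    return shoulder + bent * upper_len

# e.g. elbow = two_bone_elbow(shoulder_pos, controller_pos, 0.28, 0.26, pole=elbow_hint)
```

With goals taken from the HMD (head), the controllers (hands), and the waist puck (hips), a per-limb solve along these lines runs every frame to keep the puppet following your movements.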

Assorted Features
Transparency
The picture highlights that the level of opacity on an object has meaning: when an audio source is muted, for instance, its object takes on a see-through quality. To manage the complexity of the space, trajectory lines can also be made semi-transparent; this removes access to their control points as well.
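A minimal sketch of that state-to-opacity mapping, with illustrative alpha values rather than Invoke's actual ones:

```python
def source_alpha(muted: bool) -> float:
    """Muted audio sources render see-through; active ones render opaque."""
    return 0.35 if muted else 1.0   # illustrative alpha values

def trajectory_style(semi_transparent: bool) -> dict:
    """Semi-transparent trajectories also lose access to their control points."""
    return {
        "alpha": 0.3 if semi_transparent else 1.0,   # illustrative
        "control_points_enabled": not semi_transparent,
    }
```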

Getting Around and Staying in Touch
As the interaction space provided is quite large, a teleport system was added. And because this is a shared experience, a spatialised voice communication system is also available.
Non-realistic scaling
Given the size of the interaction space, objects change size depending on their distance from the user, getting bigger the farther away they are. This introduces a subtle set of issues but improves usability for selection and manipulation at a distance. One issue is the perceptual confusion of pushing something into the distance and watching it get bigger. The other is that each user receives asymmetric perceptual information about the space and its objects.
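A minimal sketch of such distance-dependent scaling; the linear growth law and its constants are assumptions, not Invoke's actual curve:

```python
def apparent_scale(distance: float, base_scale: float = 1.0,
                   near: float = 1.0, growth: float = 0.15) -> float:
    """Grow an object's rendered scale with its distance from the user.

    Objects within `near` metres keep their base scale; beyond that, the scale
    grows linearly with distance so distant objects remain easy to see, select,
    and manipulate. The constants are illustrative.
    """
    return base_scale * (1.0 + growth * max(0.0, distance - near))
```

Because the rendered scale depends on each user's own distance to an object, two collaborators looking at the same object see it at different sizes, which is the asymmetric perceptual information mentioned above.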