
Monday, January 20, 2014

User Tests

Last week we ran two user tests to check for potential issues. After a short introduction to our re:aktion application, the user had to "discover" its functions on his own. We want to make sure that the application is as intuitive as possible; a quick learning process is key to grabbing the user's attention. In this post we present our findings and the changes we applied.

First, a short summary of the movement triggers implemented at the start of the user tests (a small code sketch of the right-hand mapping follows the list):
  • Height of the right hand (continuous) - cutoff frequency of the filter
  • Height of the right hand (discrete) - switching the filter between lowpass / highpass / off
  • Height of the left hand - delay amount
  • Proximity to the display - volume
  • Horizontal position - panning
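
To make the right-hand triggers concrete, here is a small sketch of how a relative hand height (say 0 at hip level, 1 at head level) could drive both the discrete filter mode and the continuous cutoff frequency. The thresholds and the frequency range are illustrative guesses, not the values actually used in re:aktion.

  // Illustrative constants - not the values actually used in the project.
  final float MIN_CUTOFF_HZ = 200;
  final float MAX_CUTOFF_HZ = 8000;

  // Discrete trigger: pick the filter mode from the relative hand height (0..1).
  String filterMode(float relHeight) {
    if (relHeight < 0.33) return "off";
    if (relHeight < 0.66) return "lowpass";
    return "highpass";
  }

  // Continuous trigger: map the same height onto a cutoff frequency in Hz.
  float cutoffFrequency(float relHeight) {
    return map(constrain(relHeight, 0, 1), 0, 1, MIN_CUTOFF_HZ, MAX_CUTOFF_HZ);
  }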

First test

The user was impressed by the tracking of his body shape. The recognition of overlapping body parts worked well and was appreciated. The user realised in no time that his right hand applies a filter. Panning and volume changes were identified after a few tries. The user tried clapping, which of course had no effect. The left hand was not used to change the delay settings; it appears that the changes were too subtle to be clearly audible.

Fixes: The delay effect has been amplified and a flanger effect has been added.

Second test

The second user was more active but did not identify the nature of the effects. He realised which movements seemed to trigger an effect and was able to trigger them on purpose. Some uncertainty remained about a few movements: the user thought they had an impact they actually did not, and he was not sure whether moving both arms at the same time would amplify the effect of the filter. The user also tried jumping and moving his legs. In a multi-user setting, major problems arose in identifying the directing user.

Fixes: The user covering the largest area is now "in charge". This avoids unintentional switching of the user focus when someone passes in the background. Furthermore, the brightness of the point cloud displaying the body shape has been adjusted to give better feedback.
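
To give an idea of how the "largest area" rule can be implemented: for each detected user, count how many pixels of the per-pixel user map belong to them and keep the user with the highest count. The sketch below assumes SimpleOpenNI-style data (userMap() returning one user ID per depth pixel and getUsers() returning the detected IDs); the actual implementation may differ.

  // Return the ID of the user covering the largest screen area,
  // or -1 if nobody is detected. userMap holds one user ID per
  // depth pixel (0 = no user), userIds lists the detected users.
  int largestUser(int[] userMap, int[] userIds) {
    int bestId = -1;
    int bestCount = 0;
    for (int id : userIds) {
      int count = 0;
      for (int label : userMap) {
        if (label == id) count++;
      }
      if (count > bestCount) {
        bestCount = count;
        bestId = id;
      }
    }
    return bestId;
  }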

Saturday, January 11, 2014

Software Implementation

Initial Implementation

The initial approach was the following:
The Kinect's data stream was interpreted by a C# program using the official Kinect SDK. The gestures were to be mapped directly to their corresponding features, which were then sent out via OSC (implemented with the Bespoke OSC Library). The OSC data was sent over an Ethernet connection to a second computer running Ableton Live, where all sound generation and manipulation was supposed to happen. There, the OSC stream was received and mapped through the Max4Live plugin Livegrabber.

This rather complicated setup was chosen because of both the expected overall performance and our prior experience with Ableton Live and the .NET C# environment. After many unsuccessful attempts at synchronising the many components, this configuration was deemed too complex and was abandoned in favour of the setup described in the following paragraph.

Setup

The final setup was surprisingly simple: Processing was used both to read and interpret the Kinect data and to play and manipulate the music accordingly. The Simple OpenNI library was used for the first part and Minim for the second.
Performance is good enough to guarantee smooth image drawing and sound manipulation.
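
A minimal skeleton of this setup could look like the sketch below. It assumes the SimpleOpenNI 1.x Processing API (enableDepth(), enableUser(), update(), depthImage()) and Minim's AudioPlayer; the audio file name is a placeholder.

  import SimpleOpenNI.*;
  import ddf.minim.*;

  SimpleOpenNI context;
  Minim minim;
  AudioPlayer player;

  void setup() {
    size(640, 480);
    context = new SimpleOpenNI(this);
    context.enableDepth();   // depth map for the point cloud
    context.enableUser();    // user detection and skeleton tracking
    minim = new Minim(this);
    player = minim.loadFile("pattern.mp3");  // placeholder file name
    player.loop();
  }

  void draw() {
    context.update();                    // grab a new Kinect frame
    image(context.depthImage(), 0, 0);   // quick depth visualisation
    // gesture interpretation and sound manipulation happen here
  }

Reading the Kinect and handling the audio in a single sketch avoids the synchronisation issues of the two-computer setup described above.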

Mapping

The mapping is still a work-in-progress:
  • Overall volume is controlled by the user's distance to the camera.
  • Cutoff frequency of a low-pass filter is mapped to the relative height of the right hand.
  • Delay volume is mapped to the relative height of the left hand.

For complexity reasons, it was decided to first implement full functionality for a single user before introducing multi-user interaction. If multiple users are detected, only the last one has an active role.
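
Building on the setup sketch above, the first two mappings could be wired up roughly as follows. The use of Minim's LowPassFS effect, the joint-based height calculation, and the frequency and gain ranges are illustrative assumptions, not a description of the actual code.

  import SimpleOpenNI.*;
  import ddf.minim.*;
  import ddf.minim.effects.*;

  // 'context', 'player' and 'lowPass' come from the setup sketch; the filter
  // would be created once, e.g. new LowPassFS(1000, player.sampleRate()),
  // and attached with player.addEffect(lowPass).
  void updateMapping(SimpleOpenNI context, AudioPlayer player, LowPassFS lowPass, int userId) {
    if (!context.isTrackingSkeleton(userId)) return;

    PVector head = new PVector();
    PVector torso = new PVector();
    PVector rightHand = new PVector();
    context.getJointPositionSkeleton(userId, SimpleOpenNI.SKEL_HEAD, head);
    context.getJointPositionSkeleton(userId, SimpleOpenNI.SKEL_TORSO, torso);
    context.getJointPositionSkeleton(userId, SimpleOpenNI.SKEL_RIGHT_HAND, rightHand);

    // Relative hand height: 0 at torso level, 1 at head level.
    float relHeight = constrain((rightHand.y - torso.y) / (head.y - torso.y), 0, 1);
    lowPass.setFreq(map(relHeight, 0, 1, 200, 8000));

    // Distance of the torso to the camera (in mm) controls the gain in dB.
    player.setGain(map(torso.z, 1000, 4000, 0, -30));
  }

The delay mapping for the left hand would follow the same pattern, with a delay effect in place of the filter.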

Visual Interface

As planned, the visual interface is implemented as a greyscale point cloud that draws the camera's depth map and differentiates multiple users by colour. If no player is present, a MOVE! message is displayed. Below, a screenshot of the visual interface during the testing phase can be seen; please note that the representation looks much better in full screen.
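
As a rough illustration, a point-cloud style display like the one described here could be drawn roughly as follows. It assumes SimpleOpenNI's depthMap(), userMap() and getUsers() calls from the setup above; the sampling step, grey mapping and user colours are arbitrary examples.

  // Draws every 3rd depth pixel as a grey dot, colours pixels that belong
  // to a detected user, and shows "MOVE!" when nobody is present.
  void drawPointCloud(SimpleOpenNI context) {
    background(0);
    int[] depth = context.depthMap();   // depth per pixel in mm
    int[] users = context.userMap();    // user ID per pixel, 0 = no user
    int w = context.depthWidth();
    int step = 3;
    color[] userColours = { color(255, 80, 80), color(80, 255, 80), color(80, 80, 255) };

    for (int y = 0; y < context.depthHeight(); y += step) {
      for (int x = 0; x < w; x += step) {
        int i = y * w + x;
        if (depth[i] == 0) continue;   // no depth reading at this pixel
        if (users[i] > 0) {
          stroke(userColours[(users[i] - 1) % userColours.length]);
        } else {
          stroke(map(depth[i], 500, 4000, 255, 40));   // closer = brighter
        }
        point(x, y);
      }
    }

    if (context.getUsers().length == 0) {
      fill(255);
      textAlign(CENTER, CENTER);
      text("MOVE!", width / 2, height / 2);
    }
  }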


Todo

  • Smoothing for big parameter jumps (probably simple interpolation, see the sketch after this list)
  • Adding more than one musical pattern
  • Proper multi-user support
  • Implementing special gestures (clapping, etc.)
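
For the first point, exponential smoothing of each control value is probably enough: each frame, move the smoothed value a fraction of the way towards the raw one, so sudden jumps from tracking glitches are softened. A possible sketch (the smoothing factor is an arbitrary example):

  float smoothedCutoff = 1000;   // state kept between frames

  // Simple exponential smoothing towards the latest raw value.
  float smoothValue(float current, float target) {
    float factor = 0.1;   // 0..1, smaller = smoother but slower to react
    return lerp(current, target, factor);
  }

  // In draw():  smoothedCutoff = smoothValue(smoothedCutoff, rawCutoffFrequency);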