Last week we ran two user tests to uncover potential issues. After a short introduction to our re:aktion application, the user had to "discover" its functions on their own. We want the application to be as intuitive as possible: a quick learning process is key to grabbing the user's attention. In this post we present our findings and the changes we applied.
First, a short summary of the movement triggers implemented at the start of the user tests:
height of the right hand (continuous) - filter cutoff frequency
height of the right hand (discrete) - filter mode switch: lowpass / highpass / off
proximity to the display - volume
horizontal position - panning
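The mappings above can be sketched as a handful of small functions. This is an illustrative sketch, not the actual re:aktion code: the function names, coordinate normalization, and value ranges are all assumptions.

```python
# Hypothetical mapping of normalized skeleton coordinates to the audio
# parameters listed above. Names and ranges are assumptions, not the
# real re:aktion implementation.

def cutoff_from_hand_height(y: float) -> float:
    """Map right-hand height y in [0, 1] to a filter cutoff in Hz (log scale)."""
    lo, hi = 100.0, 10_000.0
    return lo * (hi / lo) ** max(0.0, min(1.0, y))

def filter_mode_from_hand_height(y: float) -> str:
    """Discrete switch: lower third = lowpass, middle = off, top = highpass."""
    if y < 1 / 3:
        return "lowpass"
    if y < 2 / 3:
        return "off"
    return "highpass"

def volume_from_proximity(distance_m: float, near: float = 0.8,
                          far: float = 3.5) -> float:
    """Closer to the display means louder; linear map to [0, 1]."""
    t = (far - distance_m) / (far - near)
    return max(0.0, min(1.0, t))

def pan_from_horizontal(x: float) -> float:
    """Horizontal position x in [0, 1] mapped to stereo pan in [-1, 1]."""
    return 2.0 * max(0.0, min(1.0, x)) - 1.0
```

Separating the continuous cutoff mapping from the discrete mode switch mirrors the two distinct right-hand triggers in the list.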
The first user was impressed by the tracking of his body shape; the recognition of overlapping body parts worked well and was appreciated. He realised in no time that his right hand controls a filter, and identified panning and volume changes after a few tries. He also tried clapping, which of course had no effect. The left hand was not used to change the delay settings: it appears the changes were too subtle to be audible.
Fixes: the delay effect has been amplified and a flanger effect has been added.
The second user was more active, though he did not identify the nature of the effects. He realised which movements seemed to trigger an effect and was able to trigger them on purpose, but some uncertainty remained: he thought some movements had an impact they actually did not, and was unsure whether moving both arms at the same time would amplify the effect of the filter. He also tried jumping and moving his legs. In a multi-user setting, we ran into major problems identifying the directing user.
Fixes: the user covering the largest area is now "in charge". This avoids unintentionally switching the user focus when someone passes in the background. Furthermore, the brightness of the point cloud displaying the body shape has been adjusted to give better feedback.
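The "largest area is in charge" rule could look roughly like the sketch below. Everything here is an assumption: the areas would come from each tracked user's silhouette (e.g. the bounding-box area of their point cloud), and the hysteresis margin is an invented parameter to illustrate how unintentional focus switches can be suppressed.

```python
# Hedged sketch of selecting the directing user by covered area.
# `areas` maps a user id to the area of that user's tracked silhouette;
# the margin-based hysteresis is an illustrative assumption.

def pick_directing_user(areas: dict, current=None, margin: float = 1.2):
    """Return the id of the user who should be 'in charge'.

    A challenger only takes over when their area exceeds the current
    directing user's area by `margin`, so someone briefly passing in
    the background does not steal the focus.
    """
    if not areas:
        return None
    largest = max(areas, key=areas.get)
    if current in areas and areas[largest] <= areas[current] * margin:
        return current
    return largest
```

With this kind of hysteresis, a person walking by behind the current user would need to cover noticeably more area before the focus switches.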