Our user testing session produced a range of really interesting results.
DJ Hear That!? can be considered a mildly immersive experience: it engages the user fully,
requiring focus in order to create something. The project remains open-ended, allowing users to explore
numerous sound and visual compositions.
While some experimented with typographic compositions in complete disregard
of the sounds, others chose to focus on soundscaping. The various outcomes reveal the influence of sound,
of visuals, or of both in abstract juxtaposition.
Using a large television screen for display and audio, a MacBook to run the programme and a
MIDI controller set up on a wooden pedestal, we converted a section of DW103 for our user testing.
Initially, we concealed the purpose of each button and let users play around with the whole
MIDI controller, but after the first test we realised that this let users disturb some of the MIDI
settings, causing the programme to stop working. So we set parameters for the users and told them
which buttons they could control, without telling them what the buttons did. This approach gave users a
little less clarity. However, it made them want to return and give it another go after getting a better idea of it,
which serves as an example of learning from experience.
We expected more people to button-mash, but surprisingly only one person, Toby, did so. Our guess
was that the loud noises might have scared some people and made them more cautious about the keys they were pressing,
discouraging them from button mashing.
One of the more interesting outcomes came from Jodi, who layered type into a massive composition
that filled the screen with near-total white, while at the same time generating a loud white-noise sound.
As the layers added up, the type and sound corresponded: a white screen paired with white noise, a true sound-visual.
Johnson figured out that the vowels A, E, I, O and U had special sounds and spelt his name, and Yuqi spelt out “YEET”,
while others like Joanne and Vanessa tried to draw faces using the letters, and Annabelle seemed to focus more on the sound.
Most users found the project interesting and said they had always wanted to try being a DJ, although not in this sense.
Most of them also figured out that the sounds were ambient city sounds. Users were split evenly between those who realised
the correlation between the sound and the typography and those who did not. While most users said they were influenced by
both type and sound, or by type alone, only a small percentage were influenced more by sound in their final outcomes.
Users found it interesting that a narrative formed as they typed, as if a story were taking shape, and they enjoyed being
able to create a musical soundscape from everyday sounds. They also said the project gave type enhanced meaning through the
adjustments of sound and typography, and that it created noise music. The project could also be thought of as a multimedia
approach to a design tool, and it sparked curiosity in most of our users.
Some of the improvements users suggested were more variations in type and form, such as colour and rotation.
The most requested changes were a backspace button and an improved interface for easier interaction. Users also
wanted a more standardised volume and consistent rhythm and beats to help with creating a soundscape.
Some in-class feedback was that we should invite other people, non-designers, to join the user testing, as they might have
different takeaways; musicians, for example, might focus more on the sound. Feedback from Andreas was that the interface could
be further improved, as some people might get overwhelmed by the MIDI controller; we could also explore using an Arduino to create our
own simplified MIDI controller. The sounds could be better controlled, and we could introduce more silent or quieter sounds, since
things could sometimes get quite overwhelming, turning people away from the sound and towards the visuals.
Feedback from Joanne was that we could improve the relationship between sound and type, and that she was actually more
interested in the sound than the type because she likes noise music. She also mentioned that we could explore on-screen
overlaps between the white and black forms that emerge, noting that overlaps sometimes cause odd distortions. Also, we
should put this on a website!