Continuing my investigation into whether it is possible to design a platform that encourages users to rely on their auditory memory rather than their visual memory when interacting with language, I decided to do more research and user testing.
After my initial user testing and prototyping, I reached a point where I realized it would be more beneficial to design a platform in which individuals can share both their visual and auditory interpretations of words. I developed a system using Max MSP and Processing that encourages individuals to create visuals as well as music for given words.
My previous research was mainly based on how I interact with language and how my auditory memory works when mapping letters to musical notes. As a result, I searched for examples of text-to-music platforms. Typatone is a platform similar to my previous project: it assigns a musical note to each letter and provides a canvas where anything you write is turned into music. I typed two sentences from an article called Experimenting with Ethnography.
One thing I noticed about this platform is that because the musical notes are assigned to letters, we don't hear the rhythm and patterns we usually expect in music. And for me, to remember a musical piece is to remember its rhythm.
Experimenting with Ethnography: A Companion to Analysis, edited by Andrea Ballestero and Brit Ross Winthereik, 2021
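To make the letter-to-note idea concrete, a mapping in the spirit of Typatone can be sketched in a few lines of Processing. Typatone's actual scale isn't documented here, so the pentatonic mapping below is an assumption for illustration, using the SinOsc oscillator from the processing.sound library:

```java
// Illustrative letter-to-note mapping in the spirit of Typatone.
// Assumes a C major pentatonic scale so any text sounds consonant;
// the scale Typatone actually uses may differ.
import processing.sound.*;

SinOsc osc;
int[] pent = { 0, 2, 4, 7, 9 };  // pentatonic degrees in semitones

void setup() {
  size(400, 200);
  osc = new SinOsc(this);
  osc.play();
  osc.amp(0);  // start silent
}

void keyPressed() {
  if (key >= 'a' && key <= 'z') {
    int idx = key - 'a';  // 0..25
    // Wrap the 26 letters onto pentatonic degrees across three octaves.
    int midi = 60 + pent[idx % 5] + 12 * ((idx / 5) % 3);
    osc.freq(440 * pow(2, (midi - 69) / 12.0));  // MIDI note number to Hz
    osc.amp(0.5);
  }
}

void keyReleased() {
  osc.amp(0);  // silence between letters
}
```

Because every letter has a fixed pitch and every keystroke lasts roughly the same amount of time, the output is melodically coherent but rhythmically flat, which matches what I heard on Typatone.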
Therefore, I conducted a group interview with four Persian-speaking participants who speak English as their second language. I played three different songs for them: one in Persian, one purely instrumental, and one in Spanish.
The first one was a song by Mahasti named Tiny Heart. As soon as I played the song, everyone started to move and dance to the rhythm; then they sang along with it. When I asked how they remembered this song, they said their bodies knew it before their brains. Two of them didn't even remember the song or its lyrics, but their bodies reacted to it. One of them stated that if they hear the words Tiny Heart, they remember this song. Two of them stated that they might think of this song because of the way someone pronounces those words. They had no visual memory of this song, but they also noted that because they have heard it so many times, they can't think of one specific scene or memory.
The second song was A Town With an Ocean View by Joe Hisaishi, from the anime Kiki's Delivery Service. None of the participants had heard this song before. They stated that it made them remember movie scenes, but nothing specific. One participant tried to place themselves in an imaginary scenario, and another tried to make up their own story for the song. All of them agreed that there were many elements to take in, and they all pointed to the song's storytelling quality. One participant also stated that they focused on the opening and how the flutes were played.
The last song was Moliendo Café by Julio Iglesias. None of the participants had heard this version before. Again, they recalled movie scenes and, mostly, their own memories. One participant thought of their trip to Greece, and others agreed that although they had never been to any Mediterranean country, they remembered specific scenes set in those countries. One participant focused on the word Café, and two participants stated that if they ever hear this song again, they will remember this interview.

If we take a closer look at these responses, we can conclude that in the absence of verbal comprehension (whether because the music has no vocals or because the lyrics are not understandable to participants), individuals tend to rely on their visual memory when processing sound and interacting with music.

This led me to my second round of interviews. For these sessions, I tried to find a concept that is constant across languages, then conducted one-on-one interviews with bilingual students and asked them about rain. First, I asked them to think about rain. Then I asked them to write the word rain in their language. After that, I asked them about the process of thinking about rain: did they think of a memory, a scene, or the word rain itself?
Furthermore, I asked them to tell me a story about rain and write a sentence about that story in their language. Then I asked them to translate that sentence word by word.
Two participants thought of the word rain in English first. All participants thought about the sound of rain at the beginning. After that, I asked them to create music for the sentence they wrote, on two different platforms: Chrome Music Lab and Soundtrap.
This work session helped me realize that abstract representations of words could also help us establish a connection with language. So while conducting these interviews, I was also learning more about Max MSP and Processing. Processing is a flexible software sketchbook for coding, based on Java; Max MSP is a visual programming language for music and a space for creating interactive multimedia projects. I wrote a Processing sketch that generates an abstract art form from audio input, then used Max MSP to feed the music and voice recordings of participants into it to generate interactive artwork.
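The sketch itself isn't reproduced here, but a minimal version of the idea might look like the following, assuming the processing.sound library for input analysis. The loudness of the incoming signal (a microphone, or audio routed in from Max MSP) drives the size, color, and placement of the shapes:

```java
// Minimal sketch of the idea: abstract shapes driven by live audio input.
// Assumes the processing.sound library (Sketch > Import Library > Sound).
import processing.sound.*;

AudioIn input;        // microphone, or audio routed in from Max MSP
Amplitude amplitude;  // RMS loudness analyzer

void setup() {
  size(800, 800);
  background(0);
  input = new AudioIn(this, 0);   // channel 0 of the default input device
  input.start();
  amplitude = new Amplitude(this);
  amplitude.input(input);
  colorMode(HSB, 360, 100, 100, 100);
  noStroke();
}

void draw() {
  // Fade previous frames slightly so louder passages leave denser trails.
  fill(0, 0, 0, 4);
  rect(0, 0, width, height);

  float level = amplitude.analyze();           // roughly 0.0 .. 1.0
  float diameter = map(level, 0, 0.5, 5, 400);
  // Louder sounds shift the hue and scatter the shapes further from center.
  fill(map(level, 0, 0.5, 180, 360), 80, 100, 60);
  ellipse(width/2 + random(-level * 300, level * 300),
          height/2 + random(-level * 300, level * 300),
          diameter, diameter);
}
```

In my setup, Max MSP handled the participants' recordings and playback; one simple way to connect the two programs is to route Max's audio output into Processing's audio input through the system's sound settings or a virtual audio device, though analysis data could also be sent between them over OSC.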
This is a low-fidelity prototype, but my idea is that by using these abstract visual outputs, we can create a platform where individuals share their stories and recollections of words along with their visual and musical outputs. The platform would then connect people who have generated similar artworks for the same words.
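How "similar" would be measured is still an open question. Purely as an illustration, each artwork could be reduced to a small feature vector (here, just its average color and brightness, a hypothetical choice) and compared with cosine similarity:

```java
// Purely illustrative: one possible way to compare two generated artworks.
// Each image is reduced to a small feature vector of average color values.
float[] featureVector(PImage img) {
  img.loadPixels();
  float r = 0, g = 0, b = 0, bright = 0;
  for (int c : img.pixels) {
    r += red(c); g += green(c); b += blue(c); bright += brightness(c);
  }
  int n = img.pixels.length;
  return new float[] { r/n, g/n, b/n, bright/n };
}

// Cosine similarity: 1.0 means the two vectors point in the same direction.
float similarity(float[] a, float[] b) {
  float dot = 0, magA = 0, magB = 0;
  for (int i = 0; i < a.length; i++) {
    dot += a[i] * b[i]; magA += a[i] * a[i]; magB += b[i] * b[i];
  }
  float denom = sqrt(magA) * sqrt(magB);
  return denom == 0 ? 0 : dot / denom;
}
```

Two users whose artworks for the same word score above some threshold could then be introduced to each other.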
Reflection:
My process for this research is a constant act of making and testing. Although I will collect important data through this process, it is also important to read and follow the work of other scholars in this field. There are many factors to consider when we talk about displacement and our interactions with language. Aside from our own experiences, cultural and historical aspects play an important role in how we communicate through language. Thus, my goal is to study these cultural and historical aspects as well as to learn more about digital applications for this project.