Timeline: March 2023 - June 2023
Role: UX Researcher & UI/UX Designer
Project Type: Class Project
Skills: Accessible Design, Prototyping, Design Systems, Figma
This project was created in a class dedicated to designing for accessibility. We were given no limitations on the type of user we designed for, the problem we hoped to solve, or the power of the technology within the platform we created. The only non-negotiable criterion was that the platform utilize the power of AI.
Create a platform that utilizes AI to improve the live music listening experience for members of the Deaf or Hard-of-Hearing (DHH) community.
Person #1: T
Person #2: O
Person #3: D
We interviewed these users following a script. Some key questions that guided us included:
👂🏻 Hearing
🎵 Music
🌐 Needs
📱 Music Sharing
“I like it when people hit high notes. [...] I usually listen to music on the highest volume in order to hear everything. I wish there was a way to upload an audiogram.” — Person #1
“Finding the right audio settings takes a lot of trial and error, especially since there seems to be a compromise between speech and music settings. Speech sounds very weird (like a snake talking)” – Person #2
“Traditional music appreciation outcomes for sensorineural hearing loss following cochlear implantation are generally poor. For conductive hearing loss requiring in-ear hearing aids, it’s better, like hearing people. The former deals with a nerve issue. And the cochlear implants aren’t good at telling the brain what music is.” – Person #3
Even though hearing loss differs across people, there’s a common need to tailor the music listening experience to one’s favored frequencies in an efficient and socially acceptable manner.
Utilize AI to quickly and reactively create customized audio equalizer profiles based on a listener’s hearing preferences and context.
Equalization — The process of adjusting the volume of different frequency bands within an audio signal.
Preferences — Which frequencies does the listener enjoy most or have trouble hearing?
Context — Where is the listener experiencing music? An outdoor concert, club, or just in their room?
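To make these terms concrete, here is a minimal sketch of how a profile combining preferences and context might be represented as data. The class and field names are hypothetical, purely illustrative of the concept:

```python
from dataclasses import dataclass, field

# Hypothetical sketch of an audio profile like the ones TuneIn AI would
# generate; field names and band choices are illustrative, not the app's spec.
@dataclass
class AudioProfile:
    name: str         # e.g. "Outdoor Concert - Rock"
    genre: str        # musical context: "rock", "classical", ...
    environment: str  # listening context: "outdoor concert", "room", ...
    # Preferences as an equalizer curve: gain in dB per frequency band (Hz).
    band_gains_db: dict[int, float] = field(default_factory=dict)

# Example: boost the high frequencies a listener struggles to hear while
# taming the low-end rumble of an outdoor venue.
outdoor_rock = AudioProfile(
    name="Outdoor Concert - Rock",
    genre="rock",
    environment="outdoor concert",
    band_gains_db={60: -3.0, 250: 0.0, 1000: 2.0, 4000: 6.0, 15000: 4.0},
)
```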
This idea entails automatically generated audio descriptions that accompany song lyrics. Many streaming platforms already provide lyrics, but lyrics are limited: they only convey what is being sung, and no accessible information is provided about what else is happening musically. Audio descriptions could help convey the tone of a song, volume or tempo changes, instrumentation, and so on, and could potentially be generated automatically by AI for each song.
This idea includes ASL interpretation as a supplement to the current music streaming experience. While lyrics may help the hard-of-hearing community understand what is being sung, ASL interpretation offers another layer of meaning. Because ASL interpretation of music is creative and personal, it can convey emotion and tone that lyrics alone cannot. That same creativity, however, makes it difficult to generate automatically in a meaningful way. Instead, users who are fluent in ASL could upload song interpretations for other users to view while listening.
The user uploads their audiogram, and the AI automatically determines an initial audio profile from it. From there, the user can customize audio settings around frequencies, noise suppression, voice amplification, etc. Based on this initial screening, a series of more nuanced preset profiles intended for music is created automatically.
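As a rough illustration of this audiogram-to-profile step, the sketch below substitutes audiology’s classic “half-gain rule” of thumb (prescribe roughly half the measured hearing loss as gain at each frequency) for the AI model; the function name and example values are hypothetical:

```python
# Stand-in for the AI step: seed initial equalizer gains from an audiogram
# using the half-gain rule of thumb (gain ~ half the hearing loss in dB).
def initial_gains_from_audiogram(audiogram_db_hl: dict[int, float]) -> dict[int, float]:
    """audiogram_db_hl maps frequency (Hz) -> hearing threshold (dB HL)."""
    return {freq: threshold / 2 for freq, threshold in audiogram_db_hl.items()}

# Illustrative audiogram showing typical high-frequency loss.
audiogram = {250: 10, 500: 15, 1000: 25, 2000: 40, 4000: 55, 8000: 60}
print(initial_gains_from_audiogram(audiogram))
# {250: 5.0, 500: 7.5, 1000: 12.5, 2000: 20.0, 4000: 27.5, 8000: 30.0}
```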
This idea is based on our needfinding insight that some people who are deaf enjoy the lyrical component of music as much as, or even more than, the instrumentals and vibrations. These glasses would translate audio into lyrics so that a deaf person can follow along without having to look away at an interpreter or stare at their phone, which has low social acceptability at a concert.
Another insight we learned from needfinding is that some people really enjoy music videos, which evoke a slew of feelings alongside the lyrics. It would be game-changing if music videos could be AI-generated from lyrics, especially since many songs don’t have accompanying music videos.
1. Uploading an audiogram & completing an initial hearing assessment
2. Using AI to automatically generate custom equalizer settings based on the context of the music being played
3. Creating custom settings from scratch
Out of these tasks, the core user task that serves as our prototype’s main focus is the ability to auto-generate equalizer settings using AI. Through this task, users employ an AI assistant that detects information such as the genre and instrumentation of the music, volume levels, and environmental context (e.g., whether the user is streaming music through headphones or attending an outdoor concert) in order to optimize the listening experience by quickly and automatically adjusting the settings of the user’s assistive hearing devices.
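A minimal sketch of that flow, with the genre and environment classifiers stubbed out where the real concept would call AI models (all names and presets are hypothetical):

```python
# Core task, sketched: classify what the microphone hears, then look up
# equalizer settings that match the detected context.
PRESETS = {
    ("rock", "outdoor concert"): {60: -3.0, 1000: 2.0, 4000: 6.0},
    ("pop", "headphones"):       {60: 1.0, 1000: 0.0, 4000: 3.0},
}

def detect_genre(audio_frame: bytes) -> str:
    return "rock"             # stub for a genre/instrumentation classifier

def detect_environment(audio_frame: bytes) -> str:
    return "outdoor concert"  # stub for an environment classifier

def auto_generate_profile(audio_frame: bytes) -> dict[int, float]:
    key = (detect_genre(audio_frame), detect_environment(audio_frame))
    return PRESETS.get(key, {})  # fall back to a flat EQ if nothing matches

print(auto_generate_profile(b""))  # -> {60: -3.0, 1000: 2.0, 4000: 6.0}
```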
We included the genre and environment of each audio profile on the home screen so users can quickly select the appropriate settings for their environment. Originally, audio profiles were displayed only by their icon and title. However, since we envision audio profiles being created for very specific genre and environment combinations, we updated the home screen to reflect the importance of these two contextual settings. We hope this makes it easier for users to quickly pick out the appropriate audio setting.
The user is able to select from a wide variety of genres and environments. We included the following components to ensure accurate customization:
Questions we are hoping to answer with Prototype 1:
🔊 Always Listening Mode
🎛️ Audio Adjustments
📍 Location
📲 UI Updates
ALWAYS LISTENING MODE:
We revised our prototype to include an “always listening mode,” where TuneIn continuously gathers data about the environment via the phone’s microphone and then alters the current audio profile. We found this addition important for ensuring the app reacts to changes in one’s environment. Users can swipe to quickly activate TuneIn AI’s help.
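One way such a mode might work under the hood is a simple polling loop; the audio helpers below are stubs standing in for real microphone I/O and hearing-device control:

```python
import random
import time

def measure_ambient_db() -> float:
    return 60.0 + random.uniform(-10, 10)  # stub: pretend microphone reading

def apply_gain_offset(offset_db: float) -> None:
    print(f"Adjusting active profile by {offset_db:+.1f} dB")  # stub

def always_listening_loop(poll_seconds: float, max_polls: int = 5) -> None:
    baseline_db = measure_ambient_db()
    for _ in range(max_polls):             # bounded so the sketch terminates
        time.sleep(poll_seconds)
        current_db = measure_ambient_db()
        # React only to meaningful changes, e.g. moving from lobby to stage.
        if abs(current_db - baseline_db) > 6.0:
            apply_gain_offset(current_db - baseline_db)
            baseline_db = current_db

always_listening_loop(poll_seconds=0.1)
```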
ADDITIONAL AUDIO ADJUSTMENTS:
Another change was made to our prototype’s equalizer. It originally only allowed frequencies up to 1 kHz to be manipulated, but it has now been extended to 15 kHz, in line with traditional equalizer settings. While many instruments sit in the 200-600 Hz range, cymbals, guitar effects, and some vocals often go beyond it, so we altered the equalizer to reflect the full range of frequencies one would encounter while listening to music. Our original prototype included functionality for adjusting the equalizer, but we expanded it and created more options for fine-grained audio adjustment, including an additional setting for “Environment Base Volume” (above).
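For a sense of why the extension matters: equalizer bands are typically spaced logarithmically, so in an eight-band layout spanning 60 Hz to 15 kHz, half the bands sit above the original 1 kHz cap. A quick sketch (band count and endpoints are illustrative):

```python
import math

def log_spaced_bands(lo_hz: float, hi_hz: float, n: int) -> list[int]:
    """Return n band-center frequencies spaced evenly on a log scale."""
    step = (math.log10(hi_hz) - math.log10(lo_hz)) / (n - 1)
    return [round(10 ** (math.log10(lo_hz) + i * step)) for i in range(n)]

print(log_spaced_bands(60, 15_000, 8))
# -> roughly [60, 132, 291, 639, 1407, 3097, 6817, 15000]; four of the
#    eight bands lie above 1 kHz, the range the old equalizer couldn't reach.
```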
AUTOMATED LOCATION SETTINGS:
Another alteration was adding the option to set a location for an audio profile, in addition to the current environment option. Because of acoustic differences between concert venues, one’s home, the outdoors, and so on, location is another important contextual consideration when creating an audio profile. With this setting, we hope audio profiles become more intuitive, letting users tailor them to settings that make sense to them. The app can also prompt users with suggested audio profiles based on their location, reducing the friction of finding a location-based audio profile.
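A minimal sketch of how such suggestions might work, assuming each saved profile stores approximate venue coordinates (all names and values are illustrative):

```python
import math

def distance_km(a: tuple[float, float], b: tuple[float, float]) -> float:
    # Equirectangular approximation; accurate enough at venue-sized distances.
    lat1, lon1, lat2, lon2 = map(math.radians, (*a, *b))
    x = (lon2 - lon1) * math.cos((lat1 + lat2) / 2)
    return 6371 * math.hypot(x, lat2 - lat1)

def suggest_profiles(user_loc, saved_profiles, radius_km=0.5):
    """Suggest saved profiles whose venue is within radius_km of the user."""
    return [name for name, loc in saved_profiles.items()
            if distance_km(user_loc, loc) <= radius_km]

saved = {
    "Bill Graham - Pop": (37.7786, -122.4178),  # approximate coordinates
    "Frost Amphitheater - Rock": (37.4329, -122.1670),
}
print(suggest_profiles((37.7789, -122.4170), saved))  # ['Bill Graham - Pop']
```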
UI UPDATES:
We added more labels with informative icons and adjusted the UI to place more emphasis on the CTA tiles under “Create a new audio profile” while placing less emphasis on the individual audio profiles. In addition, we added more action buttons to the audio profile settings page for a more efficient user journey, along with an action button to sort audio profiles on the home screen and a way for users to quickly remove their active audio profile. We also wanted to guide users toward the AI feature rather than manually entering their own audio settings, so we changed the UI of the “Let TuneIn listen” button to take up more space and draw in the user.
🎨 Create
“You’re at a concert at Frost. How might you create a new audio profile for this scenario?”
“Why did you choose this option?”
☝🏼 Select
“You’ve just arrived at Bill Graham Civic Auditorium. How might you select an audio profile for this scenario?”
🎛️ Adjust
“You’re in your car listening to a funk song. How might you adjust the audio profile when the playlist switches to pop?”
THEY LIKED:
Calibrating settings with AI:
- “I REALLY like the idea! There is merit for an AI to calibrate the settings then adjust them automatically. [This] would solve the need of contacting an audiologist.” — B
Considering multiple listening experiences and acoustics:
- “Too often you kinda get a one solution fits all applications, but I like that this takes into consideration different acoustics.” - A
Automatically adjusting with Always Listening mode:
- “A lot of times you forget to update your settings when needed.” — A
- “Should just turn it on and forget about it.” — B
Having lots of options for customization
THEY GAVE FEEDBACK ABOUT:
Aiming for simplicity:
- “Simplicity is key; the more steps I take the less inclined I am to use it.” — B
- “The application for my hearing aid has many more customization settings, but it can be very overwhelming. Sometimes it’s just too many things to customize or you find the one thing that you like and then you don’t touch it after that.” - A
Minimizing steps to 2 clicks:
“2 clicks and I’m out, I’m back at the concert.” — B
Testing with someone older:
- “Before age 65, 1 in 10 have hearing loss but after age 65 it’s 8 in 10.” — B
Theme 1: TuneIn AI is an improvement over their current experience.
- Both participants opted for the AI generated audio profiles as opposed to creating them from scratch.
- Current solutions are “good enough” but this prototype is “great”.
- Overall, excited about this solution!
Theme 2: Balance between simplicity & customization.
- Current solutions take a lot of time & difficulty to find optimal settings.
- Our solution fills this gap & empowers users to edit settings efficiently.
- We should be careful to not cross this balance by adding too much complexity.
- Consider that the typical person with hearing loss is older & likely less tech-savvy.
This process strengthened my ability to conduct user testing and extract the information needed to improve our product with each prototype. This is the most robust prototyping process I have participated in, and the final product is something I am extremely proud of. I also learned a lot about building a design system and incorporating branding into the design process. Seeing this project through from start to finish was an incredibly educational and rewarding experience. My greatest takeaway is the value of accessible design and learning how to incorporate accessibility into every aspect of UI/UX design.