Gamify AI speech bias education through kid-centered design
project intro
Role
UX design, visual design, content design, user testing
Timeline
4 weeks
with
1 developer/designer
tools
Figma, Illustrator, Hugging Face
Impact highlight
Only team (out of 5) to deliver beyond the requirements
From image-to-text to speech classification
problem
We were one of five teams tasked with this challenge. Here are the constraints we faced:
Technical constraint 📦
Team constraint 👯♀️
Time constraint ⏱️
Testing constraint ⚙️
solution
01
Three distinct characters help children experience different types of AI speech bias—emotion, language, and accent—through voice-based inputs.
02
Children speak directly to each character and get immediate feedback from AI, making bias visible and learning hands-on.
03
After each interaction, kids are guided to think critically about the AI’s response and discover how bias may have influenced the outcome.
research
Before jumping into design, we leveraged key insights from the lab's previous tests with 6–10-year-olds:
Insight 1 📊
Prior research focused on image-to-text bias; we could explore other modalities.
Insight 2 🧠
Process should be easy to follow and balance learning and playing.
Insight 3 📏
Given the timeline, we designed for age 6 first; a design that works for the youngest kids would also work for older ones.
Insight 4 🎨
Engaging visuals and interactive elements are key to engagement.
project Direction
After testing several Hugging Face APIs, we settled on a speech-emotion classifier that offered both accuracy and speed, giving us a solid foundation to build on.
Image-to-text → Speech classification
Speech-emotion classification API on Hugging Face
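To illustrate how the classifier's output drives each character's reaction, here is a minimal sketch. It assumes the response shape of a Hugging Face audio-classification model (a list of label/score dicts); the model name and the `min_score` threshold are hypothetical, not the exact values we shipped.

```python
# Sketch: turning speech-emotion classifier output into a character reaction.
# Assumption: the classifier returns a list of {"label": str, "score": float}
# dicts, the shape produced by Hugging Face audio-classification pipelines
# (e.g. a model like "superb/wav2vec2-base-superb-er" -- illustrative only).

def top_emotion(predictions, min_score=0.4):
    """Pick the emotion label the character should react to.

    Returns the highest-scoring label, or "unsure" when the model is not
    confident. The "unsure" case is the interesting one for kids: it is a
    natural moment to pause and ask why the AI might have gotten it wrong.
    """
    best = max(predictions, key=lambda p: p["score"])
    return best["label"] if best["score"] >= min_score else "unsure"


# Example: a confident prediction drives the character's visual change.
reaction = top_emotion([
    {"label": "happy", "score": 0.87},
    {"label": "sad", "score": 0.09},
    {"label": "angry", "score": 0.04},
])
```

In the actual prototype, the audio call and character animations live elsewhere; this only shows the decision step between the API response and the on-screen feedback.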
user flow & ideation
I began by mapping out the core learning flow, then distilled it into the key information that shaped the structure of the main screen.
Iteration 1: Kid-centered design
👀 Lost focus easily on static texts
😕 Didn't know where to look
🎮 Expected fun visual and interactive styles

💬 Dynamic, conversational guidance
🎯 Simple, intuitive interactions with hierarchy
🧱 Match aesthetics to popular kid games
Based on the feedback, I adjusted the visual hierarchy with elements like conversation bubbles, buttons, and instructions to make the page more engaging.
I created a set of pixel cartoon characters that evolve visually based on the AI’s response. Inspired by popular kids’ media, I landed on a palette and style that felt lively, tech-forward, and accessible to ages 6–10.
At the end of each voice interaction, we added a pause to prompt kids to reflect on the AI's response, encouraging critical thinking about the process.
To make the experience more engaging, I scripted all character dialogue and trained AI voices to give each one a distinct personality—since we didn’t have time for human voiceovers.
Iteration 2: Usability
We showed our iterations to the researcher and tested a few more times. Then we made three major changes before launch.
Evaluation
We tested with three kids (ages 6, 8, and 10) through UW KidsTeam. The kids enjoyed the experience, learned through exploration, and gave us valuable feedback that we reported to the researchers.
🎮
The gamification made it easy and fun for kids to stay involved.
🎨
Kids really liked the visuals, which reminded them of trending games they play.
🤖
Kids learned by tricking the AI into giving the answer they wanted.
🗣️
AI-trained voices lacked the expressiveness kids expected.
📚
Kids found the texts too long, and there was no way to skip steps when starting over.
reflection