Designing an AI-assisted video tool to help curators clip and shape stories
Project intro
Collaborating with Edmonds Historical Museum (EHM), I led the design of an AI-assisted video clipping tool that enables curators to transform long oral history videos into short, context-rich clips. This helped the museum share more archival stories with the public in an engaging way while reducing curators’ editing workload.
Role
UX design, visual design, user testing
Timeline
3 months
With
1 designer
Tools
Figma
Impact highlight
The client praised the tool’s innovation and its potential to scale across small museums
Delivered as a package with an interactive demo, design toolkit, and implementation guide
Problem
Oral history videos from Edmonds Historical Museum
Oral history is the collection and study of historical information through recorded interviews with people who have personal knowledge of past events.
Oral history interviews are often hours long and contain rich context and personal narratives. But most of this content stays buried in museum archives, largely because curators struggle to edit clips while retaining narrative flow and historical integrity.
Solution
01
After uploading a video, curators begin in a transcript view, where they can skim and select segments with help from AI-suggested highlights.
02
Curators arrange and refine the selected clips with support from narrative-aware AI suggestions, making the final product audience-ready.
03
Curators can revisit their chosen clips at any point to delete, add, or adjust them, giving them the flexibility to refine the story structure as they go.
Discovery
Current curator workflow
When we started the project, museum leadership was also interested in how AI could support curators in a new tool. We interviewed the curator at EHM and reached out to other small-museum professionals to map their workflows and identify friction points.
⏰
Time-intensive scanning
Curators start with a theme in mind and must scan hours-long transcripts for relevant content
🫥
Disconnected tools
Existing tools are spread across platforms and too complex for non-technical users
🔮
Loss of narrative flow
Context often gets lost when manually clipping, making stories feel fragmented
🤖
Low AI familiarity
AI is intriguing, but curators are unfamiliar with its capabilities and limitations
Opportunity 1
Helping curators find theme-relevant content faster
Opportunity 2
Helping curators preserve storytelling context and narrative flow in final outputs
Iteration 1: Flow
Going forward with our two opportunity areas, we decided to build two innovative features integrated into the user flow:
Text-based transcript scanning and content selection
Narrative identification and refinement within selected content
Similar tools drop users straight into a video canvas after upload, so we tried that style first. I then proposed a two-page flow to better match curators’ existing workflow. We worried this break from convention might confuse users, but testing showed that splitting the flow reduced cognitive load on each page and made navigation feel easier and more natural.
Curators tended to miss the right-side tabs in the one-page editor, as they were used to linear workflows
Having transcript, video, and editing tools on one screen felt cluttered and distracting
The separate-page flow matched how curators naturally move through tasks, one stage at a time
Separating steps let curators focus on transcript review first, then narrative building, without feeling rushed
Iteration 2: Design for AI
In the first step, we explored different levels of AI involvement, including a multi-turn dialogue versus a single-turn search bar that gave users more autonomy. After testing, we found that users preferred the simpler approach.
I found that the horizontal bar looked too minor and didn’t scale to more content, so I proposed a vertical layout. This became the final design and struck a better balance: flexible and visible enough, yet unobtrusive when curators wanted to work manually.
Further conversations with curators made us realize that they didn’t just want suggestions; they wanted to understand why a clip was suggested. I therefore introduced a card that briefly explains why each clip was chosen. This small addition made a big difference in transparency and trust.
In the second step, the AI identifies contextual information lost during clipping, a novel capability we were experimenting with. Inspired by Grammarly, I tried grouping AI suggestions into categories and displaying solutions upfront, assuming this would speed up edits for users.
Testing surfaced some surprises: the categories felt jargony, and showing solutions so early reduced curators’ sense of control and trust. I redesigned the cards to present the issue first, offer three simple solutions to choose from, and let curators expand for AI help only if they wanted it.
The labels felt too jargony, and showing AI solutions upfront removed curator autonomy
Revealing AI suggestions only after curators selected a solution path gave them more control, clarity, and trust
Iteration 3: Refine Interactions
During the design process, I also led several detailed refinements to make the tool feel more intuitive. One example was improving how users see and adjust overlay lengths within the text-editing zone. I explored different styles, and we went with the version that was visually least distracting.
Impact
The project earned strong recognition from EHM’s leadership for its innovation and potential to scale across small museums. Our delivery of an interactive demo, design toolkit, and implementation guide also strengthened the museum’s grant application.
📈
Quantitative
🚀
Qualitative
Reflection