Content Review: From Design to Usability Testing
TL;DR
I designed the external review flow for ContentRockr, then tested it with 6 participants. Despite recruiting only experienced users (not the target "Rudi" persona), the study revealed critical issues: track changes failed to capture edits, comments couldn't be edited, and key context like the briefing was hidden. Quick fixes went to development immediately through annotated screenshots, while I continue working on the larger redesigns.
Overview
ContentRockr is a SaaS platform for content creation, management, and validation. Following my earlier research on the text editor, I designed a new external review flow — the feature that lets content owners share articles with outside experts for feedback before publication. This study tested that design. The goal was to evaluate whether the interface I created was intuitive enough for reviewers who may rarely use the tool — people with deep subject expertise but limited technical comfort.
Project Details
Role: UX Researcher & Designer (solo)
Team: Worked with the product owner and a developer through weekly check-ins. The project required close collaboration with development: detailed annotations, specifications, and ongoing communication as findings were implemented in parallel with continued analysis.
Timeline: October 2025 – Ongoing
Tools: Dovetail (transcription, tagging), Figma (UI design, prototyping, affinity mapping), Zoom (remote sessions), Tally (screener survey), Calendly (scheduling), Fathom (session recording), Notion (documentation)
Problem Statement
Initial goal: Test the redesigned external review flow and evaluate whether external reviewers can intuitively navigate the interface, provide feedback, and complete their review tasks without friction.
What we assumed going in:
- External reviewers are typically subject matter experts, not regular ContentRockr users
- They may have limited technical comfort and use the tool infrequently — or they may have used similar tools and have strong existing mental models for how such products work
What we knew going in:
- The interface needed to support core tasks: commenting, suggesting edits, adding/replacing images, and saving progress
What I set out to learn: How intuitive is the review flow for someone using it for the first time, regardless of whether they're familiar with similar tools?
The Redesign
Following my earlier research on the text editor, I redesigned the external review flow. The previous version had several design problems:
- Cluttered entry point — The welcome modal appeared over a busy interface, showing internal details (Project, Code) before the reviewer even started
- Tab-based navigation — Content, SEO, and Briefing were separated into horizontal tabs, forcing users to switch views rather than keeping content front and center
- Briefing hidden in a tab — Critical context (purpose, audience, tone, deadline) was buried in a separate tab, not visible during the review
- No focused review canvas — Content was squeezed alongside metadata with no dedicated editing experience
- Exposed internal metadata — Publication details, target group, and buyer personas were visible to external reviewers who didn't need this information
- Basic email invitation — Lacked context about what reviewing involves
Users & Audience
Primary users: External reviewers — subject matter experts invited by content owners and other tool users to review and validate articles before publication. They typically don't have ContentRockr accounts and may use the tool only occasionally.
Target persona: "Rudi" — an older professional with deep domain expertise but limited technical comfort. This persona was established before I joined the project.
Study participants: 6 participants, none of whom had used ContentRockr before. All had experience with editing and collaboration tools (Google Docs, Word, track changes). Backgrounds included:
- CRM/data specialist with 10+ years in digital publishing
- Freelance graphic designer (10 years experience)
- Two marketing specialists in EdTech and B2B (3–6 years)
- Freelancer community founder (6–7 years)
- Research project manager (11 months in role)
Recruitment challenge: We aimed to recruit two groups — participants matching the "Rudi" persona (limited technical comfort) and more experienced users. We were unable to recruit any Rudi-profile participants. As a solo researcher balancing research, design, and implementation, I had limited capacity to expand recruitment efforts. The participants we did recruit all fell into the more technically comfortable group.
Research Process
This study followed a structured approach: planning the methodology and preparing materials, recruiting participants through a screener survey, conducting remote moderated sessions, and synthesizing findings through tagging and affinity mapping. Below is a detailed breakdown of each phase.
Key Findings
The research revealed critical usability issues that fall into two categories: fundamental functionality problems that break the review workflow, and discoverability issues where users couldn't find features or didn't understand their purpose. Below are the highest-severity findings based on frequency (how many participants encountered it) and impact (how much it disrupted their task).
Design Response
The findings are being addressed in phases. I identified quick fixes that could be implemented immediately without redesign, and communicated these directly to the developer through annotated screenshots. While those are being implemented, I'm working on the more complex issues that require deeper design work.
I approached prioritisation by separating issues into two tracks:
- Quick fixes — Small changes that don't require redesign: bugs, label changes, icon corrections, interaction tweaks. These can be implemented quickly while I work on larger issues.
- Redesign required — Complex issues that need new design solutions: comments functionality, header image flow, track changes display.
Limitations
The study had several constraints that may have affected the breadth of insights:
- Participant profile: all six participants were technically comfortable; none matched the "Rudi" persona the flow is primarily designed for
- Recruitment capacity: as a solo researcher balancing research, design, and implementation, I had limited capacity to expand recruitment
- Sample size: six remote sessions can surface the most common issues but may miss less frequent ones
Outcomes & Impact
This project is ongoing as of December 2025. This iteration has not yet been released to users, so there are no tangible business metrics or conversion data to report. The outcomes below reflect the immediate impact of the research on the product development process.
What I Learned
This project reinforced that research is messy, non-linear, and full of surprises. Below are reflections on what I'd do differently and growth moments from this study.
Key Takeaway
Usability testing of the external review flow I designed revealed that experienced users — comfortable with tools like Google Docs and Word — still struggled with core functionality: track changes didn't capture edits reliably, comments couldn't be edited, and key context was hidden. If experienced users hit these walls, less technical users would struggle more. The research enabled immediate action through annotated dev handoffs while deeper redesign work continues in parallel.