
Usability Test Analysis Template

Session Information

Participant ID: P__
Test Date: [YYYY-MM-DD]
Facilitator: [Name]
Feature/Workflow Tested: [Name]
Session Duration: [XX] minutes
Recording Location: [Link or file path]


Participant Profile

Role: [e.g., Senior Platform Engineer]
Experience: [e.g., 7 years DevOps, 2 years with Fawkes]
Platform Familiarity: [New User / Occasional / Regular / Power User]
Tech Stack: [e.g., Java, Kubernetes, Jenkins]
Team Context: [e.g., Works on core services team, 8-person squad]


Executive Summary

Overall Session Assessment: [2-3 sentences summarizing the key findings from this session]

Critical Issues: [Number] Major Issues: [Number] Minor Issues: [Number]

Top Insight from This Session: [The single most important thing learned]


Task Results

Task 1: [Task Name]

Goal: [What the user needed to accomplish]

Status: ☐ Success ☐ Partial Success ☐ Failure

Metrics:

  • Time to Complete: [X] minutes (Target: [Y] minutes)
  • Confidence Rating: [X]/5
  • Assistance Required: [None / Minor Hint / Significant Help]
  • Errors/Wrong Turns: [Number]

Task Flow:

  1. [First action taken]
  2. [Second action taken]
  3. [Third action taken]
  4. [Continue...]

Expected vs. Actual Path:

  • Expected: [Describe ideal path]
  • Actual: [Describe what user did]
  • Deviation: ☐ Followed expected ☐ Minor deviation ☐ Completely different

Observations:

  • What Worked Well:
    • [Observation 1]
    • [Observation 2]
  • What Didn't Work:
    • [Observation 1]
    • [Observation 2]
  • 🤔 Confusion Points:
    • [Where and why user was confused]
  • 😤 Frustration Moments:
    • [What caused frustration]

Direct Quotes:

"[Capture exact words - especially expressions of confusion, frustration, or delight]"

"[Another quote]"

Issues Identified:

Issue 1.1: [Brief Issue Title]

  • Severity: ☐ Critical ☐ Major ☐ Minor ☐ Enhancement
  • Description: [Detailed description of the problem]
  • User Impact: [How this affected the user's ability to complete the task]
  • Frequency: [First time seen / Seen in X previous sessions]
  • Evidence: "[Quote or description of behavior]"
  • Recommendation: [Suggested fix]

Issue 1.2: [Brief Issue Title]

[Repeat structure]

Post-Task Feedback:

  • "How did that feel?" - "[User's response]"
  • Most confusing aspect: "[User's response]"
  • What would make it easier: "[User's response]"

Task 2: [Task Name]

[Repeat the same structure as Task 1]

Goal: [Task objective]

Status: ☐ Success ☐ Partial Success ☐ Failure

Metrics:

  • Time to Complete: [X] minutes
  • Confidence Rating: [X]/5
  • Assistance Required: [None / Minor / Significant]
  • Errors/Wrong Turns: [Number]

[Continue with same structure as Task 1...]


Task 3: [Task Name]

[Repeat structure]


Overall Findings

Usability Metrics Summary

Metric                         Result         Target    Met Target?
Overall Task Success Rate      __% (__/__)    >80%      ☐ Yes ☐ No
Average Task Completion Time   __ min         <__ min   ☐ Yes ☐ No
Average Confidence Rating      __/5           >4/5      ☐ Yes ☐ No
Ease of Use Rating             __/5           >4/5      ☐ Yes ☐ No
Likelihood to Recommend        __/5           >4/5      ☐ Yes ☐ No
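
The summary values in the table above can be computed directly from the per-task metrics recorded earlier in this template. A minimal sketch (the dict keys are illustrative, not part of the template — map them to however your session notes are stored):

```python
# Sketch: derive the Usability Metrics Summary from per-task results.
# Field names below are assumptions for illustration only.
tasks = [
    {"success": True,  "minutes": 6.5,  "confidence": 4},
    {"success": False, "minutes": 12.0, "confidence": 2},
    {"success": True,  "minutes": 4.0,  "confidence": 5},
]

n = len(tasks)
successes = sum(t["success"] for t in tasks)
success_rate = 100 * successes / n
avg_minutes = sum(t["minutes"] for t in tasks) / n
avg_confidence = sum(t["confidence"] for t in tasks) / n

print(f"Overall Task Success Rate: {success_rate:.0f}% ({successes}/{n})")
print(f"Average Task Completion Time: {avg_minutes:.1f} min")
print(f"Average Confidence Rating: {avg_confidence:.1f}/5")
```

Ease-of-use and likelihood-to-recommend come from the post-task interview rather than task data, so they are filled in by hand.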

Post-Task Interview Summary

Most Difficult Task: [Task name]
Reason: [User's explanation]

Easiest Task: [Task name]
Reason: [User's explanation]

Biggest Surprise: [Positive or negative surprise]

Missing Capabilities: [What user expected but didn't find]

One Thing to Change: [User's top priority improvement]

Additional Feedback:

  • [Other insights from post-task questions]
  • [Suggestions or comments]

Issue Catalog

Critical Issues (P0)

C1: [Issue Title]

  • Tasks Affected: [Task numbers]
  • Description: [Full description]
  • User Impact: Completely blocks [workflow/task completion]
  • Evidence:
    • Quote: "[User quote]"
    • Timestamp: [Video timestamp]
    • Behavior: [What was observed]
  • Frequency: [How many times in this session? Seen in other sessions?]
  • Recommendation: [Specific, actionable fix]
  • Priority Rationale: [Why this is critical]

[Repeat for each critical issue]

Major Issues (P1)

M1: [Issue Title]

  • Tasks Affected: [Task numbers]
  • Description: [Full description]
  • User Impact: Causes significant delay or frustration
  • Evidence: "[Quote]", timestamp [XX:XX]
  • Frequency: [Occurrence count]
  • Recommendation: [Suggested fix]

[Repeat for each major issue]

Minor Issues (P2)

m1: [Issue Title]

  • Description: [Brief description]
  • Impact: Minor confusion or inefficiency
  • Recommendation: [Suggested improvement]

[Repeat for each minor issue]


Behavioral Insights

  • Primary navigation used: [Which menus/paths user relied on]
  • Search usage: [Did they use search? Was it successful?]
  • Visual scanning: [What caught attention? What was overlooked?]
  • Mental model: [How did user conceptualize the workflow?]

Emotional Journey

  • Confidence at start: [High/Medium/Low]
  • Moments of frustration: [When and why]
  • Moments of delight: [When and why]
  • Overall sentiment: [Positive/Neutral/Negative]

Think-Aloud Quality

  • Quality: ☐ Excellent ☐ Good ☐ Fair ☐ Poor
  • Insights from narration: [Key thoughts revealed through thinking aloud]

Workarounds Observed

  • [Describe any creative workarounds user attempted]
  • [Note if workarounds were successful]

Key Quotes

Frustration/Confusion

"[Quote 1 - with context]" — During [Task X], when [context]

"[Quote 2]" — [Context]

Delight/Satisfaction

"[Quote 1]" — [Context]

"[Quote 2]" — [Context]

Mental Models/Expectations

"[Quote revealing what user expected]" — [Context]

"[Quote showing user's understanding]" — [Context]

Suggestions

"[Quote with user suggestion]" — [Context]


Positive Findings

What Worked Well:

  1. [Specific feature or interaction that was successful]
     Why it worked: [User feedback or observation]

  2. [Another positive finding]
     Why it worked: [Explanation]

  3. [Another positive finding]
     Why it worked: [Explanation]

Features Users Appreciated:

  • [Feature 1]: "[User quote or description]"
  • [Feature 2]: "[User quote or description]"

Patterns and Themes

Emerging Patterns (compare with other sessions):

  • [Pattern 1]: Seen in [X] of [Y] sessions
  • [Pattern 2]: Seen in [X] of [Y] sessions

Persona-Specific Observations:

  • [Observations unique to this persona/experience level]
  • [Different from other personas how?]

Technical Context Impact:

  • [Did user's tech stack affect their experience?]
  • [Did team context matter?]

Cross-Session Synthesis Notes:

  • Compare findings with other participants
  • Look for patterns across multiple users
  • Identify issues by frequency (1/8, 2/8, etc.)
  • Note differences between personas or experience levels
  • Flag issues for deeper investigation
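
The frequency counts called for in the synthesis notes above (1/8, 2/8, etc.) can be maintained with a simple tally over per-session issue lists. A hedged sketch — the data layout is an assumption, not something this template prescribes:

```python
from collections import Counter

# Each inner list holds the issue IDs (e.g., "C1", "M1", "m1") observed
# in one session. Layout is illustrative; adapt to your own notes format.
sessions = [
    ["C1", "M1"],
    ["C1", "m1"],
    ["M1"],
    ["C1"],
]

counts = Counter(issue for session in sessions for issue in session)
total = len(sessions)
for issue, seen in counts.most_common():
    print(f"{issue}: seen in {seen}/{total} sessions")
```

Issues seen by most participants are usually the strongest candidates for the deeper investigation flagged above.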

Recommendations

Immediate Actions (P0)

  1. [Fix for Critical Issue C1]
     • Rationale: [Why this must be fixed immediately]
     • Estimated Effort: [Low/Medium/High]
     • Owner: [Team or person]
     • GitHub Issue: [Link if created]

[Repeat for each P0 item]

Short-Term Improvements (P1)

  1. [Fix for Major Issue M1]
     • Rationale: [Why this should be fixed soon]
     • Estimated Effort: [Low/Medium/High]
     • Owner: [Team or person]

[Repeat for each P1 item]

Long-Term Enhancements (P2)

  • [List of minor improvements and enhancements]

Quick Wins

  • [Issues that are high impact but low effort]
  • [Can be addressed immediately]

Follow-Up Actions

  • [ ] File GitHub issues for all P0 and P1 problems
  • [ ] Share critical issues with team immediately
  • [ ] Add session data to aggregate metrics tracker
  • [ ] Update issue frequency counts
  • [ ] Add quotes to quotes library
  • [ ] Tag related issues in backlog
  • [ ] Update persona documents if new insights emerge
  • [ ] Schedule follow-up testing after fixes (if needed)

Session Metadata

Recording Details:

  • Video file: [Filename or link]
  • Duration: [XX:XX]
  • Key timestamps:
    • [XX:XX] - [Description of important moment]
    • [XX:XX] - [Description]

Data Storage:

  • Anonymized notes: [This file location]
  • Raw recording: [Secure location - not in Git]
  • Observation checklist: [File location]

Analysis Date: [YYYY-MM-DD]
Analyst: [Name]
Review Status: ☐ Draft ☐ Reviewed ☐ Final


Appendix

Task Success Criteria Reference

Task 1: [Name]

  • [ ] [Success criterion 1]
  • [ ] [Success criterion 2]
  • [ ] [Success criterion 3]

Task 2: [Name]

  • [ ] [Success criterion 1]
  • [ ] [Success criterion 2]

[Continue for all tasks]

Observer Notes

[Any additional notes from observer/note-taker not captured above]

Technical Issues

[Any technical problems during the session - platform bugs, test environment issues, connectivity problems]


Analysis Version: 1.0
Template Version: 1.0
Last Updated: December 2025
Owner: Product Team