Redefining online feedback collection for ServiceOntario
My Role
Researcher and UX Designer
Team
Project lead
UX Design co-op (me)
Tools
Figma, Miro, Ontario Design System
Timeline
4 months
What did I work on?
I was an Experience Design Intern at Ontario Digital Service, where I worked on improving how user feedback is collected across its digital services.
I conducted interviews with stakeholders and usability testing with end users to understand when and how people are most likely to leave meaningful feedback. I co-designed a set of reusable UI components that can be embedded across various services, helping product teams gather more consistent and actionable insights.
Overview
Our design process
01/ Research
Stakeholder interviews with product owners
Jurisdictional scan
Survey best practices research
02/ Ideate
Brainstorming design approaches
Sketching wireframes
Mind mapping
03/ Design
Creating interactive prototypes in Figma
Prototype review with developers
Facilitating design critiques
04/ Test
Preparing user research scripts and test setups
Recruiting a diverse range of users in Ontario
Facilitating usability testing
Who is this for?
Understanding the needs of both public customers and product teams.
Customers need an easy way to share their feedback, while product teams need clear, useful information to make improvements.
The current experience
How is customer feedback currently used by product teams?
We spoke to 15 product teams about common themes and pain points with the feedback currently collected. These interviews helped build shared ownership of the solution and ensured we designed with, not just for, the people using these insights.
Without direct input from staff and product owners, there is a risk that any redesigned feedback component may continue to generate data that is underused or not aligned with the needs of the primary decision makers.
Stakeholder interviews
What we learned
Verbatim feedback is more actionable
Product teams found that open-ended verbatim feedback revealed deeper insights than 5-point rating scales. Teams shared that actionable verbatim survey feedback ultimately informs product improvements, including content changes, user interface (UI) changes, fixes to technical bugs, and holistic changes to the service.
Manual analysis of feedback is essential but time-consuming
Teams who do review the feedback have their own methods of analysis, ranging from informal reviews of weekly emails to extracting the data and coding verbatim responses in Excel. This process is time-consuming, and some product owners expressed a desire for feedback to be pre-categorized so they can quickly identify top issues.
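As a rough illustration of what that pre-categorization could look like, here is a minimal keyword-based tagger. The categories and keywords below are hypothetical examples, not taken from the project, and a production version would need a far more robust approach:

```ts
// Hypothetical sketch: tag each verbatim comment with issue categories by
// keyword, so product owners can tally top issues without hand-coding every
// response in a spreadsheet. Categories and keywords are illustrative only.
type Category = "content" | "technical" | "account" | "ui";

const KEYWORDS: Record<Category, string[]> = {
  content: ["typo", "unclear", "wording", "instructions"],
  technical: ["error", "crash", "timeout", "bug"],
  account: ["login", "password", "sign in", "account"],
  ui: ["button", "link", "layout", "page"],
};

function categorize(comment: string): Category[] {
  const text = comment.toLowerCase();
  return (Object.keys(KEYWORDS) as Category[]).filter((category) =>
    KEYWORDS[category].some((keyword) => text.includes(keyword))
  );
}

// Tally how often each category appears across a batch of comments.
function topIssues(comments: string[]): Map<Category, number> {
  const counts = new Map<Category, number>();
  for (const comment of comments) {
    for (const category of categorize(comment)) {
      counts.set(category, (counts.get(category) ?? 0) + 1);
    }
  }
  return counts;
}
```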
Desire for a better understanding of the user’s end-to-end journey
Product teams want more context around user feedback to pinpoint where issues occur, such as the specific step, trigger, or error involved. Currently, feedback they receive is often vague and only prompted at error screens, making it difficult to act on. Product owners emphasized that more timely and contextual input would make feedback more actionable.
Ideation & brainstorming
Exploring various design concepts through collaborative brainstorming sessions
We explored two different approaches to collecting feedback during a transaction, and created mid-fidelity prototypes to gather early input from our internal team and stakeholders.
Approach 1: Persistent feedback throughout the transaction
Approach 2: Feedback CTA shown only at errors or exits
We chose to move forward with the persistent feedback approach to capture insights throughout the user journey, not just at known error points. This allows users to report unexpected issues mid-transaction as well as at error pages, supporting product owners’ need for a more complete view of users’ end-to-end experience.
We also brainstormed variations on what the persistent feedback component could look like:
Thumbs up/down: Quick and easy. However, product owners value verbatim comments, and binary feedback on its own is not actionable.
Above the footer: This treatment is subtle but still stands out. Color variations can include light gray or blue. (A rough markup sketch of this treatment follows below.)
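Because the component is meant to be dropped into many different services, the above-the-footer treatment can be sketched as a small embeddable element. The following is illustrative only; the function name, copy, and styling are assumptions, not actual Ontario Design System code:

```ts
// Hypothetical sketch of the persistent "above the footer" treatment.
// The bar is injected just above the page footer so it appears on every
// step of the transaction, not only at error screens.
function renderFeedbackBar(page: HTMLElement, serviceName: string): void {
  const bar = document.createElement("aside");
  bar.setAttribute("aria-label", "Feedback");
  bar.style.background = "#f2f2f2"; // light gray variant; blue was the other option explored

  const prompt = document.createElement("p");
  prompt.textContent = `Tell us about your experience with ${serviceName}.`;

  const openButton = document.createElement("button");
  openButton.type = "button";
  openButton.textContent = "Share your feedback";
  openButton.addEventListener("click", () => {
    // In the tested design this would expand the embedded survey in place,
    // rather than opening a new tab, so users don't lose their place.
  });

  bar.append(prompt, openButton);
  // insertBefore with a null reference node appends to the end,
  // so pages without a <footer> still get the bar.
  page.insertBefore(bar, page.querySelector("footer"));
}
```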
User research plan
Preparing our user research plan to test our designs
Idea: Create friction points that may elicit feedback
To uncover users' natural reactions and behaviours around the feedback component, we deliberately planted errors such as:
broken UI buttons
user input errors
technical errors
account creation issues
content errors
Usability testing
During early testing, we had to pivot our approach.
Observation
We noticed that many participants weren’t naturally engaging with the feedback component. We saw this as a reflection of real-world behaviour: many users don’t think to leave feedback online, especially when they’re focused on completing tasks.
Pivot
After the first few test scenarios, we started asking more direct questions about the feedback component (for example: "Would you want to share your thoughts with ServiceOntario?"), especially if a participant hadn’t interacted with it. We also added questions at the end to learn about their past behaviours with feedback. This pivot gave us a more holistic view of how, when, and why users choose to share (or not share) feedback.
What we learned:
From our analysis, we noticed different types of feedback providers.
“I'm a believer in good digital products so I would share some feedback and say it's not working.”
Frequent feedback providers
Behaviours
Often have time to provide feedback, find it low effort and enjoy doing it
Would often provide feedback at the end of a transaction, once they’ve completed their task
Motivations
Want to help improve products for themselves, others or the organization through constructive feedback
Empathize with those who receive the feedback because they find feedback useful in their own work or experiences
May provide positive feedback to show appreciation
“If I had an okay, perfectly normal experience I typically don't fill out [feedback]. If I had a bad experience I will fill it out. If [I] had a really good one I will fill it out.”
Situation-dependent feedback providers
Behaviours
Will share their thoughts in very positive or negative situations but less inclined to share for neutral experiences
More inclined to share when the opportunity to provide feedback is clear and simple
More inclined to share when they feel as though their feedback will be actioned/heard
May prefer to go in person or use contact channels instead of providing feedback, in order to get their task completed
Motivations
Tends to share feedback when motivated by a desire to improve the service
Sharing feedback depends on their mood coming out of the interaction or service, and on the time they have available
Positive feedback providers
Behaviours
Will share their thoughts in very positive interactions
Prefer not to focus on criticism or complaints
Motivations
Want to let people and businesses know when they’re doing a good job
Empathize with the people receiving the feedback
Often appreciate receiving positive feedback in their own work
Are especially motivated if the feedback is about a human interaction
“I wouldn’t share feedback, because it typically doesn't give you instant help... I would click on Contact us.”
Hesitant feedback providers
Behaviours
May not share feedback due to time constraints
Hesitant to share feedback for minor (“petty”) issues such as aesthetics or content errors
Motivations
May prioritize contacting support, as they are task-focused and believe feedback will not help them in a timely manner
Assume others have already shared feedback for system errors, and that back-end staff are aware of these issues
May believe that feedback will not be actioned/heard
Have had past experiences where their feedback was dismissed
Key findings
How did people react to the survey questions?
The satisfaction question made those with negative experiences feel more heard and validated.
One participant mentioned that the presence of the satisfaction question might motivate them to complete the open-text field to describe why they were unsatisfied. The questions helped structure their feedback more than an open-text field alone would.
...However, some felt the question may not reflect more nuanced experiences.
Some found that the binary ‘yes’ or ‘no’ options didn’t adequately capture nuanced experiences, since users may have both positive and negative experiences during the same transaction.
Checkboxes help participants structure their feedback.
Checkboxes help users decide what to do next:
If their issue was represented, they would select it and possibly provide more detail.
If it was not represented, they would describe the issue in the open-text field.
Interestingly, checkboxes also create the perception of effective analysis on the back end and make people feel heard.
When participants encountered checkboxes, they expected their feedback to be heard and actioned: the options felt specific to their context, which gave them more confidence that submissions were actually reviewed.
Many participants would still fill in the open-text field to provide more context.
Even after checking off an issue, participants wanted to provide more details to help get the issue resolved. They would include, or want to include, details such as what they had done to try to resolve the issue (e.g. retrying, reloading the page, checking their input information).
However, not receiving a reply was a deterrent to providing open text feedback.
Some expected a reply from filling out the feedback form, and the phrase “You will not receive a reply” would discourage them from providing feedback. Others felt this indicated nobody would review the feedback.
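To make the survey shape these findings point to concrete, here is a minimal sketch of the tested question flow as a data model; the field names and checkbox options are hypothetical stand-ins, not the production component:

```ts
// Hypothetical model of the tested survey flow: a binary satisfaction
// question, service-specific checkboxes, and an optional open-text field.
interface FeedbackSurvey {
  satisfied: "yes" | "no" | null; // binary question, shown first
  issues: string[];               // checked issue options
  comment: string;                // optional verbatim detail
}

// Checkbox options tailored to the specific service, per the findings above.
const ISSUE_OPTIONS: string[] = [
  "I couldn't upload my documents",
  "The page showed a technical error",
  "The instructions were unclear",
  "Something else",
];

// An entirely empty survey is not submittable, but any single part
// (a rating, a checked box, or a comment) is enough on its own.
function isSubmittable(survey: FeedbackSurvey): boolean {
  return (
    survey.satisfied !== null ||
    survey.issues.length > 0 ||
    survey.comment.trim().length > 0
  );
}
```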
Putting it all together…
Then, we asked: "So what?"
Many participants would only provide feedback if they had confidence the feedback would be heard.
Frequent feedback providers feel like feedback can make a difference and can help meaningfully improve a system. Situation-dependent feedback providers are more inclined to share when they feel as though their feedback will be heard.
So what?
Checkbox options should be specific and tailored to the online service rather than being vague and generic, to make users feel confident that their feedback will be heard and taken seriously.
Lack of trust in the organization deters people from providing feedback.
Hesitant feedback providers expressed a lack of trust in government hearing and understanding their feedback due to past negative experiences.
So what?
Including cues like clear next steps (for example, clear help and contact information at errors), or even subtle language changes in the feedback CTA or submission messages can build trust.
Trust is also built over time, through meaningful service improvements based on user feedback.
The decision to provide feedback is heavily dependent on time and availability.
Time dependency was mentioned by all types of feedback providers, especially the situation-dependent ones. For some frequent feedback providers, having the time, combined with a clear opportunity to respond, makes them highly likely to provide feedback.
So what?
Embed the survey in the page: the embedded survey feels more “immediate” than one that opens in a new tab.
When encountering an error, troubleshooting and seeking help took precedence over feedback.
Most users would review and retry the transaction, often blaming themselves for the error. In particular, hesitant feedback providers tended to seek only contact information (like email or social media) via “contact us” in the footer.
So what?
Prioritize quick access to help and contact information over feedback in error states and known hard stops.
Supporting both product teams and customers
The embedded feedback survey approach gives product owners a clearer view of the full user journey, not just at errors or at the end, and users mentioned that this approach lets them stay focused and provide feedback without losing their place.
Reflection
What I've learned
Thoughtful design can build trust
One of the biggest takeaways from this project was understanding how much trust (or the lack of it) shapes whether people choose to give feedback at all. Instead of just focusing on how to collect feedback, our team had to consider how to signal that the feedback matters. This influenced everything from how specific the survey questions were, to how the component was framed and placed. While trust is built over time through action and meaningful improvements, this project showed me that thoughtful design can help set the tone for that relationship and make users more willing to provide feedback in the first place.
Expect the unexpected in usability testing
Usability testing doesn’t always go as expected, and that’s a good thing! I learned that people often approach tasks in ways I didn’t anticipate, and that the most valuable insights come when I let go of my own assumptions and truly listen. I learned to stay flexible, ask better questions, and pivot my approach when necessary. Some of the "unexpected" or surprising sessions also surfaced the most valuable insights.