Improving ServiceOntario Feedback Collection

Exploring feedback behaviours and creating design system components for gathering customer feedback.

Introduction

Inconsistent and fragmented feedback collection across ServiceOntario's online services makes it challenging for product owners to gather actionable insights.


In January 2025, I joined a team at Ontario Digital Service focused on improving how user feedback is collected across ServiceOntario's online services. Through staff interviews and usability testing with end users, we explored when and how users are most likely to provide meaningful feedback. I designed UI components that could be embedded across different services to collect user feedback.

Overview

My Role: UX Researcher and Designer

Tools: Figma, Miro, Ontario Design System

Timeline: 4 months

The Process

01/ Research

Stakeholder interviews with product owners

Jurisdictional scan

Survey best practices research

02/ Ideate

Brainstorming design approaches

Sketching wireframes

Mind mapping

03/ Design

Creating interactive prototypes in Figma

Prototype review with developers

Facilitating design critiques

04/ Test

Preparing user research scripts and test setups

Recruiting a diverse range of users in Ontario

Facilitating usability testing

Context

Understanding the needs of both public customers and product teams

When collecting feedback, we wanted to understand both public customers and product teams. Customers need an easy way to share their feedback, while product teams need clear, useful information to make improvements.

User: Product teams and executives

Those who use the feedback to understand what public users are experiencing.

User: ServiceOntario customers (public)

Users who use a service and may want to provide feedback or rate it.

Current experience

How is customer feedback used by product teams?

We spoke to 15 product teams to discuss common themes and pain points with the feedback currently collected. These interviews helped build shared ownership of the solution and ensured we designed with, not just for, the people using these insights.

Without direct input from staff and product owners, there is a risk that any redesigned feedback component may continue to generate data that is underused or not aligned with the needs of the primary decision makers.

Staff interviews

What we learned

Verbatim feedback is more actionable

Product teams found that open-ended verbatim feedback revealed deeper insights than 5-point rating scales. Teams shared that actionable verbatim survey feedback ultimately informs product improvements, including content changes, user interface (UI) changes, fixes to technical bugs, and holistic changes to the service.

Manual analysis of feedback is essential but time-consuming

Teams who do review the feedback have their own methods of analysis, ranging from informal reviews of weekly emails to extracting the data and coding verbatim responses in Excel. This process is time-consuming, and some product owners expressed a desire for feedback to be pre-categorized so they can quickly identify top issues.

Desire for a better understanding of the user's end-to-end journey

Product teams want more context around user feedback to pinpoint where issues occur, such as the specific step, trigger, or error involved. Currently, feedback they receive is often vague and only prompted at error screens, making it difficult to act on. Product owners emphasized that more timely and contextual input would make feedback more actionable.

How might we create a consistent, unified feedback experience across services that gives product teams the contextual, actionable insights they need to improve their services?

Ideation & brainstorming

Exploring design solutions

We explored two approaches for collecting mid-transaction feedback:

Approach 1: Persistent feedback CTA throughout the transaction

Approach 2: Feedback CTA shown only at errors or exits

We chose to move forward with the persistent feedback approach to capture insights throughout the user journey, not just at known error points. This allows users to report unexpected issues mid-transaction as well as at error pages, supporting product owners’ need for a more complete view of users’ end-to-end experience.

We also brainstormed variations of how the persistent feedback component could look:

Thumbs up/down: Quick and easy. However, product owners value verbatim feedback, and binary ratings alone are not actionable.

Above the footer: This treatment is subtle but still stands out. Colour variations can include light grey or blue.

Floating action button (FAB): Is a common UI pattern. However, on mobile, the FAB may block essential content on the screen.

In-page: A more prominent design placement suitable for hard stop error pages.

We chose to test the feedback component above the footer throughout the transaction, and the in-page treatment on error pages. Our hypothesis was that the footer placement is subtle enough not to distract from the main task, while the more prominent in-page placement suits error pages, where we expected users to be more inclined to provide feedback.
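
To make the two placements concrete, here is a minimal sketch of how a single shared component might be configured per placement. All names here (FeedbackConfig, ctaLabel, the service IDs and labels) are hypothetical illustrations, not Ontario Design System or ServiceOntario APIs.

```ts
// Hypothetical sketch: one shared feedback component, two placement variants.
// Names and labels are illustrative assumptions only.

type Placement = "above-footer" | "in-page";

interface FeedbackConfig {
  serviceId: string;    // which online service the feedback belongs to
  placement: Placement; // subtle footer CTA vs. prominent error treatment
  ctaLabel: string;     // CTA language customized per service
}

// Persistent placement: subtle, available throughout the transaction.
const midTransaction: FeedbackConfig = {
  serviceId: "health-card-renewal",
  placement: "above-footer",
  ctaLabel: "Tell us about your experience with this service",
};

// In-page placement: more prominent, used on hard-stop error pages.
const errorPage: FeedbackConfig = {
  serviceId: "health-card-renewal",
  placement: "in-page",
  ctaLabel: "Help us understand what went wrong",
};
```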

User research plan

Preparing the research plan

The challenge: How can we test the feedback component with users in a way that captures their natural behaviours when providing online feedback, without directly prompting them to give feedback?

Idea: Create friction points that may elicit feedback

To uncover users' natural reactions and behaviours with the feedback component, we created friction points that might elicit feedback, such as:

  • broken UI buttons

  • user input errors

  • technical errors

  • account creation issues

  • content errors

Usability testing

Rethinking how we understand feedback

We began usability testing using four existing online transactions: account creation, address change, health card renewal, and licence plate expiry check. The goal was to observe how users find and engage with feedback opportunities. However, we ran into an interesting problem:

Problem

During early testing, we noticed that many participants weren’t naturally engaging with the feedback component. We saw this as a reflection of real-world behaviour: many users don’t think to leave feedback online, especially when they’re focused on completing tasks.

Pivot

After the first few test scenarios, we started asking more direct questions about the feedback component (for example: "Would you want to share your thoughts with ServiceOntario?"), especially if a participant hadn’t interacted with it. We also added questions at the end to learn about participants’ past feedback behaviours. This pivot gave us a more holistic view of how, when, and why users choose to share (or not share) feedback.

What we learned:

The different types of feedback providers

Our pivot gave us a more nuanced perspective on users' behaviours around providing online feedback. Analyzing our findings, we identified four distinct types of feedback providers and created a persona for each:

“I'm a believer in good digital products so I would share some feedback and say it's not working.”

Frequent feedback providers

Behaviours

  • Often have time to provide feedback, find it low effort and enjoy doing it

  • Would often provide feedback at the end of a transaction, once they’ve completed their task

Motivations

  • Want to help improve products for themselves, others or the organization through constructive feedback

  • Empathize with those who receive the feedback because they find feedback useful in their own work or experiences

  • May provide positive feedback to show appreciation

“If I had an okay, perfectly normal experience I typically don't fill out [feedback]. If I had a bad experience I will fill it out. If I had a really good one I will fill it out.”

Situation dependent feedback providers

Behaviours

  • Will share their thoughts in very positive or negative situations but less inclined to share for neutral experiences

  • More inclined to share when the opportunity to provide feedback is clear and simple

  • More inclined to share when they feel as though their feedback will be actioned/heard

  • May prefer to go in person or contact support instead of providing feedback, to get their task completed

Motivations

  • Tend to share feedback when motivated by a desire to improve the service

  • Whether they share feedback depends on their mood after the interaction and on the time they have available

“They're doing something great…so why wouldn't someone provide feedback in those cases?”

Positive feedback providers

Behaviours

  • Will share their thoughts in very positive interactions

  • Prefer not to focus on criticism or complaints

Motivations

  • Want to let people and businesses know when they’re doing a good job

  • Empathize with the people receiving the feedback

  • Often appreciate receiving positive feedback in their own work

  • Are especially motivated if the feedback is about a human interaction

“I wouldn’t share feedback, because it typically doesn't give you instant help… I would click on Contact us.”

Hesitant feedback providers

Behaviours

  • May not share feedback due to time constraints

  • Hesitant to share feedback for minor (“petty”) issues such as aesthetics or content errors

Motivations

  • May prioritize contacting support, as they are task-focused and believe feedback will not help them in a timely manner

  • Assume others have already reported system errors and that back-end staff are aware of these issues

  • May believe that feedback will not be actioned/heard

  • Have had past experiences where their feedback was dismissed

The "So what"?

Many participants would only provide feedback if they had confidence the feedback would be heard.

Frequent feedback providers feel that feedback can make a difference and meaningfully improve a system. Situation-dependent feedback providers are more inclined to share when they feel their feedback will be heard.

So what?

Survey questions and checkbox options should be specific and tailored to the online service rather than being vague and generic, to make users feel confident that their feedback will be heard and taken seriously.
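
As a sketch of what “tailored to the online service” could look like in practice, here is one possible per-service configuration. The service keys and labels are invented examples, not real ServiceOntario survey copy.

```ts
// Hypothetical per-service checkbox options. Keys and labels are invented
// examples to illustrate tailoring, not real ServiceOntario survey copy.
const issueOptions: Record<string, string[]> = {
  "address-change": [
    "My new address was not accepted",
    "I could not verify my identity",
    "Something else",
  ],
  "licence-plate-expiry-check": [
    "My plate number was not found",
    "The expiry date shown looks wrong",
    "Something else",
  ],
};
```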

Lack of trust in the organization deters people from providing feedback.

Hesitant feedback providers expressed a lack of trust in government hearing and understanding their feedback due to past negative experiences.

So what?

Including cues like clear next steps (for example, clear help and contact information at errors), or even subtle language changes in the feedback CTA or submission messages can build trust. Trust is also built over time, through meaningful service improvements based on user feedback.

The decision to provide feedback is heavily dependent on time and availability.

Time dependency was mentioned by all types of feedback providers, especially the situation-dependent ones. Some frequent feedback providers are highly likely to provide feedback if they have the time and the opportunity to do so is clear.

So what?

Favour the embedded survey: it feels more “immediate” and quicker to complete than a survey that opens in a new tab.

When encountering an error, troubleshooting and seeking help took precedence over feedback.

Most users would review and retry the transaction, often blaming themselves for the error. In particular, hesitant feedback providers tend to only seek contact information (like email or social media) via “Contact us” in the footer.

So what?

Prioritize quick access to help and contact information over feedback in error states and known hard stops.

Key findings

How did users react to the survey questions?

Survey question findings: Satisfaction

The satisfaction question made those with negative experiences feel more heard and validated.

One participant mentioned that the presence of the satisfaction question might motivate them to complete the open-text field to describe why they were unsatisfied. The question helped structure their feedback more than an open-text field alone would.

...However some feel the question may not reflect more nuanced experiences.

Some found that the binary ‘yes’ or ‘no’ options didn’t adequately capture nuanced experiences: users may have both positive and negative experiences during a single transaction.

Survey question findings: Checkboxes

Checkboxes help participants structure their feedback.

Checkboxes help users decide what to do next:

  • If their issue was represented, they would select it and possibly provide more detail.

  • If it was not represented, they would describe the issue in the open-text field.

Interestingly, checkboxes also create the perception of effective analysis on the backend and make people feel heard.

Participants expected their feedback to be heard and actioned when encountering checkboxes: because the options felt specific to their context, they had more confidence the feedback was being reviewed.

Survey question findings: Open text

Many participants would still fill in the open text field to provide more context.

Even after checking off an issue, participants want to provide more details to help get the issue resolved. They would include, or would like to include, details such as what they've done to try to resolve the issue (e.g. retrying, reloading the page, checking input information).

However, not receiving a reply was a deterrent to providing open text feedback.

Some expected a reply after filling out the feedback form, and the phrase “You will not receive a reply” would discourage them from providing feedback. Others felt it indicated that nobody would review the feedback.

Outcome

Final revisions and designs

01/ Feedback placement throughout a transaction

This embedded survey approach supports product owners’ need for a clearer view of the user's end-to-end journey by enabling feedback collection at key moments throughout the transaction, not just at the end or at known error points. This approach can help uncover unknown issues and potentially help diagnose reasons behind drop-offs.

From a usability standpoint, the embedded approach maintains a stronger sense of continuity as users felt they hadn’t lost their place in the transaction. Users preferred being able to scroll up and reference the issue directly while filling out the survey, rather than toggling between tabs.


Filling out an embedded survey reduces cognitive load, making it more likely users will complete the feedback.


Other recommendations:

  • Customize issue checkboxes to include relevant, service-specific problems, building trust that feedback will be heard and acted upon.

    Checkboxes also help product teams with categorization and analysis of user feedback.


  • The footer placement and colour make feedback visible and noticeable enough for those who need it, without being distracting for those who are uninterested in providing feedback. The CTA language is customized per service.


  • When asking ‘Are you satisfied?’ include a third option of ‘Somewhat’ to allow for more nuance in users’ feedback.


  • Although users appreciated the transparency of “You will not receive a reply,” they still want to feel like their feedback would be read by someone. We added language to clarify that feedback is being heard and reviewed by the team.
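
Pulling the question findings and recommendations above together, here is one possible shape for a single feedback submission. The field names and schema are assumptions for illustration, not an actual ServiceOntario data model.

```ts
// Hypothetical shape of one feedback submission, reflecting the three
// question types discussed above. Field names are assumptions only.
interface FeedbackSubmission {
  serviceId: string;                    // which service the feedback is for
  satisfied: "yes" | "somewhat" | "no"; // includes the recommended third option
  issues: string[];                     // service-specific checkbox selections
  comment?: string;                     // optional open-text context
  submittedAt: string;                  // ISO 8601 timestamp
}
```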

02/ Feedback placement at error screens

When encountering an error, troubleshooting and seeking help took precedence over feedback. Many would not provide feedback at an error or hard stop until after they had resolved their issue, or only after attempting the transaction multiple times.


Therefore, the redesigned error screens prioritize quick access to help and contact information over feedback.


We placed the option to provide feedback after the contact and help information, to deter users from entering personal information in the open-text field.


We also recommended collecting technical information at hard stops and known errors and sharing it with product owners.

The in-page placement of the feedback CTA is more prominent and users are more inclined to look for feedback at these screens.
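
As a sketch of the kind of technical information that could be captured automatically at hard stops, here is a minimal example. All names are hypothetical, not an existing ServiceOntario API.

```ts
// Hypothetical sketch: technical context captured automatically at hard
// stops and known errors, to be shared with product owners alongside
// feedback. Names are assumptions, not an existing ServiceOntario API.
interface ErrorContext {
  serviceId: string;
  step: string;        // e.g. "review-and-confirm"
  errorCode?: string;  // known error identifier, if any
  url: string;         // page where the error occurred
  userAgent: string;   // browser/device details for reproduction
  occurredAt: string;  // ISO 8601 timestamp
}

function captureErrorContext(
  serviceId: string,
  step: string,
  errorCode?: string,
): ErrorContext {
  return {
    serviceId,
    step,
    errorCode,
    url: window.location.href,
    userAgent: navigator.userAgent,
    occurredAt: new Date().toISOString(),
  };
}
```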


Reflection

Designing for trust in public services

One of the biggest takeaways from this project was understanding how much trust (or the lack of it) shapes whether people choose to give feedback at all. Some participants shared past experiences where their feedback to government services felt ignored or unresolved, which made them hesitant to give feedback again.

Instead of just focusing on how to collect feedback, our team had to consider how to signal that the feedback matters. This influenced everything from how specific the survey questions were, to how the component was framed and placed. While trust is built over time through action and meaningful improvements, this project showed me that thoughtful design can help set the tone for that relationship and make users more willing to provide feedback in the first place.
