Hiya Connect Usability Test


Role: Team lead · Project manager · Designer · Researcher
Context: Industry-sponsored master's project
Duration: Jan–Mar 2024 (3 months)
Skills: Heuristic evaluation · Evaluative research · User interviews
Team: 2 researchers, 3 designers
Overview
Summary
My team and I designed and conducted usability testing with call center employees on Hiya Inc.'s updated call analytics platform.
Task
The new design, launched in January 2024, was based on assumptions and released without testing.
Our goal was to identify any friction points in the interface by gathering participant feedback on reporting features, data visualizations, and overall dashboard navigation.
Outcome
After conducting our testing, we provided Hiya Inc. with 8 design recommendations to improve its platform.
Many of these recommendations were implemented by the design team within a few weeks of our presentation.
My Contributions
Project manager
Planned and documented backend procedures to ensure that tests ran smoothly. Managed and scheduled interviews via usertesting.com.
Researcher & facilitator
Led development of research plan and study kit. Facilitated usability tests.
Designer
Led creation of 8 design recommendations which were delivered to Hiya Inc.'s research and design team.
Background
What is Hiya Inc.?
This project was sponsored by Hiya Inc., a B2B company focused on protecting users from unwanted and fraudulent calls while enhancing the overall quality of call interactions.
What is Hiya Connect?
Hiya Connect’s performance dashboard allows call center employees to assess their call performance and other metrics such as call volume and answer rate. With this dashboard, call center managers can:
Examine metrics within a certain time period.
Hover over each graph to look at individual data points.
See the calculated margin of error for further precision.

Hiya Connect Account performance page
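As a sketch of what the dashboard's margin of error represents: for a rate metric like answer rate, the standard normal-approximation margin of error for a proportion widens sharply as call volume drops. This is an illustrative formula at a 95% confidence level, not Hiya's actual calculation:

```python
import math

def margin_of_error(successes: int, total: int, z: float = 1.96) -> float:
    """Normal-approximation margin of error for a proportion,
    e.g. answer rate = answered calls / total calls.
    z=1.96 corresponds to a 95% confidence level."""
    p = successes / total
    return z * math.sqrt(p * (1 - p) / total)

# With low call volume, the margin of error balloons -- which is why
# an analytics dashboard would warn when there isn't enough data.
print(margin_of_error(450, 1000))  # large sample: tight bound (~±3%)
print(margin_of_error(9, 20))      # small sample: wide bound (~±22%)
```

The same answer rate (45%) yields very different uncertainty depending on sample size, which motivates surfacing margin of error alongside the metric itself.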
Research planning
Research questions
While we wanted to assess the platform's overall usability, we specifically wanted to answer the questions below:
Can people confidently navigate the platform in order to run a report for a specific time frame and interval (weekly, monthly, etc.)?
Do people understand how to interpret data visualizations?
Do people understand what "low call volume" means and the correct actions to take next?
Do people understand how margin of error impacts the confidence in their decisions?
Do people understand why they would use the dashboard versus performance analytics and vice versa?
Methodology
To answer these questions, we collected both qualitative and quantitative data.
1 Heuristic evaluation
To familiarize ourselves with the platform, we conducted a heuristic evaluation.
5 Cognitive walkthroughs
We conducted 5 cognitive walkthroughs via Zoom.
Likert scale questions
We measured platform difficulty, likelihood of use, and comparison to other analytics platforms.
Task success metrics
We recorded pass/fail criteria to evaluate task success.
Target audience
We used userinterviews.com to recruit English-speaking call center employees who don’t have prior experience with the Hiya Connect console.
Usability testing
Testing objectives
We wanted to evaluate participants' abilities to:

1. Navigate the platform
2. Use the calendar/date selector
3. Interpret data visualizations
4. Interpret pop-ups and banners
Scenarios
We created three scenarios (A, B, C) to walk each participant through. We alternated the starting scenario (A or B) across participants. Scenario C was run only if participants hadn't independently discovered the margin of error toggle during the first two scenarios.
A.) Enough data

In this scenario, there is enough data for participants to interpret metrics and draw conclusions.
B.) Low call volume

In this scenario, there is not enough call data and a "low call volume" message will appear.
C.) Margin of error

If participants haven't discovered this toggle on their own, we will prompt them.
Tasks
Scenarios A and B involved the same 4 tasks:
Task #1
Run a report over a [specific time period].
Scenario A: 2 months
Scenario B: 5 weeks
Success criteria
Participant adjusts the time frame window to run a report for the specified time frame
Participant uses the date picker to select the specified time period
Participant interprets each metric's visualization
Task #2
Review each metric type.
Success criteria
Participants are able to successfully interpret the metrics and draw conclusions from them.
Task #3
Locate and turn on the Margin of Error toggle.
Success criteria
Participants are able to locate and turn on the Margin of Error toggle feature.

Margin of Error toggle currently turned Off
Margin of Error pop-up box
Task #4
Interpret the Margin of Error feature.
Success criteria
Participants are able to successfully interpret the graph and corresponding data points.
Scenario C: Margin of Error
We only went into Scenario C if the participant hadn't discovered the Margin of Error on their own.
Task #1
Turn on the Margin of Error option.
Success criteria
Participants provide observations and draw new conclusions about the data with the Margin of Error turned on.

Margin of Error toggle currently turned Off
Results
Analysis
After conducting 5 usability tests, we examined our notes and transcriptions to pull out memorable quotes and observations. We also calculated our Likert question results and task success rates.

Categorizing notes and feedback


Calculating Likert scale results and task success rates
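The quantitative side of our analysis was simple arithmetic: a pass/fail rate per task and a mean per Likert question. A minimal sketch of that calculation, using hypothetical participant data (not our actual results):

```python
# Hypothetical pass/fail outcomes per task (True = pass), 5 participants
task_results = {
    "run_report":        [True, True, False, True, True],
    "interpret_metrics": [True, False, True, True, False],
}

# Hypothetical 5-point Likert ratings for one question (e.g. ease of use)
likert_ratings = [4, 5, 3, 4, 4]

# Task success rate = passes / attempts, per task
task_success = {task: sum(r) / len(r) for task, r in task_results.items()}

# Likert result = mean rating across participants
likert_mean = sum(likert_ratings) / len(likert_ratings)

print(task_success)  # e.g. {'run_report': 0.8, 'interpret_metrics': 0.6}
print(likert_mean)   # e.g. 4.0
```

With only 5 participants these numbers are directional rather than statistically conclusive, which is why we paired them with qualitative observations and quotes.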
Severity rankings
We ranked each usability issue by severity:
High
High priority items prevent participants from successfully completing identified tasks and should be addressed immediately.
Medium
Medium priority items involve adding more informational features but do not block task completion.
Low
Low priority items are minor usability problems that do not affect the overall use of the analytics platform.
Below, I only go into the findings that led to significant design recommendations. Click this link to view the full presentation.
Finding #1
Participants question the Margin of Error feature because it appears in some graphs and not others.

Margin of Error

Inconsistent indication of Margin of Error's presence
Evidence
3 out of 5 participants were confused by the Margin of Error's inconsistent presence.
The Estimated Call Volume graph has no Margin of Error data or icon, while the Answer Rate and Unique Answer Rate graphs have both. The Call Duration and Answer Rate per Attempt graphs show Margin of Error data but no icons.
2 out of 5 participants initially noted that they did not notice any changes in the graphs.
2 participants didn't notice the Margin of Error graph or the data in the hover pop-up.
Quotes
“If I have Margin of Error for some and not always...it just causes more questions than gives answers.” (P4)
“I don’t see any change from what we had previously. It still looks the same.” (P5)
Recommendation
When the Margin of Error toggle is on, change the graph colors to match the blue Margin of Error icon for increased visibility.
Original design
Design recommendation
Finding #2
Participants wanted more data on the graphs and multiple data visualization options.
Evidence
3 out of 5 participants wanted additional data visualization options beyond bar and line graphs, such as pie charts or scattergrams.
Some participants highlighted that various data visualization methods could more effectively cater to individual preferences and the specific purposes for which they intend to use the data.
One participant suggested incorporating auto-scaling for greater clarity on graph information and differences.
4 out of 5 participants wanted to produce more in-depth analysis through more data visualization elements.
For Answer Rate per Attempt, 1 participant suggested incorporating more information, such as time of day, whether they were unique calls, customer wait time, and customer demographics.
Quotes
“I was just trying to click to see if it gave me a further drill down or gave me any more information, but it didn't respond.” (P4)
“...some people like pie charts, some people like bar charts, some people like line charts. Some people like scatter grams. I mean, it all depends on what you're doing.” (P2)
Recommendation
Add a dropdown menu next to the graph’s informational section which allows participants to select different data visualization options.

Original design
Design recommendation
Finding #3
The pop-up windows had low discoverability at first glance.
A graph's pop-up window when hovered over
Evidence
Participants discovered the feature at different times.
Some participants only discovered the pop-up feature after engaging with the console for a few minutes.
4 out of 5 participants appreciated being able to see more information when hovering over the graphs.
1 participant did not engage with it at all.
Quotes
“As I hover over the graph, it actually does give me the numeric data I was looking for. I didn't realize that it did that as far as the dates so that piece is helpful.” (P4)
“Without hovering, it just gives you an overview, but if you want to really know…it can make a difference when reporting.” (P1)
Recommendation
Add a line of instruction under the date picker that lets participants know they can hover over all the graphs on the page to see specific data points.

Original design
Design recommendation
Outcome
Executive presentations
We presented these findings in front of senior designers and researchers at Hiya’s offices in downtown Seattle. Our design recommendations were well-received:
End of a great project!

Testimonial from industry sponsor
Reflection
Pilot tests
We ran 3 pilot tests prior to our scheduled interviews. This helped us identify and troubleshoot logistical and technical issues.
Thinking on my feet
Overall, our tests ran smoothly. Some interviews followed the script and scenarios closely, while others required us to adapt on the spot to address our research questions without being too transparent or awkward. It provided valuable interview practice, teaching us to actively listen and process participant responses while maintaining a natural flow.
Understanding test data
If we were to do this again, I would have had our team take more time at the beginning of the project to understand how the test data affected each visualization. Because we weren't using real data, one of the graphs looked odd. This came up during a test: we thought the participant was struggling to interpret the visualization, when it was actually an artifact of the test data.