


  

*** The images shown are not the actual system screens; the originals are withheld because the information is sensitive and may be covered by an NDA.

Healthcare system usability testing: Contextual Interview  


Duration           Three months
Ownership        Chao-Ling Chyou

Categories        UX Research
Contribution     Contextual Interview, Usability Testing


Eval System is a platform owned by a Silicon Valley healthcare start-up. The product is a child development evaluation system built around a questionnaire with five categories of questions based on AI recognition. This usability test specifically covers part of the Eval System flow: the questionnaire.



Why do this testing?


  • To address the high bounce rate in the questionnaire part of the flow
  • To uncover the critical errors that may cause users to drop out
  • To evaluate whether the questionnaire flow and features align with people’s behaviours


How did we test?


We ran usability testing, measuring task completion and time on task, to uncover the issues within the Eval System.

Measurements to be collected:

  • Suggestions and improvements for the flow and functions in the questionnaire part
  • Assessment: completion rate of the questionnaire part


Who are the testers?


Six couples (two in Taiwan, two in China, and two in the U.S.)

  • No prior experience using the Eval System
  • No professional background in language therapy
  • Their child is aged between 2 and 6 (below 2 is unsuitable)


How were the testers recruited?


Taiwan

  • Parenting platform
  • Parenting forum

China 

  • Parenting 
  • Zhihu platform
  • WeChat groups

U.S.

  • Referrals from colleagues


Usability testing environment


A quiet environment, free of noise and disruption, and at the user’s home, so that the results are unbiased and fair. Only the website or a tablet was used.


Usability roles


Test Observer

  • Chao-Ling Chyou

Test Participants

  • Six users

Data Loggers

  • Marketing team members

What are the metrics of the usability testing?

Scenario Completion

  • Completion of the task (successful or unsuccessful)

Critical Errors

  • Collection of confusion points

Scenario Completion Time
(Time on Task)


Goals of the usability testing

Completion Rate

  • Completion of the task without any errors during testing.

    A completion rate of [50%/16.7%] is the goal for each task in this usability test.

Time on Task

  • The time to complete a scenario is referred to as "time on task". It is measured from the time the person begins the scenario to the time he/she signals completion.

Non-Critical Errors

  • Errors that do not affect users’ ability to complete the tasks.

Critical Errors

  • Errors that affect users’ ability to complete the tasks.
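
These goal metrics can be tallied directly from the session records. Below is a minimal sketch in Python; the field names and sample values are hypothetical, not the actual study data:

  # Hypothetical session records; participant IDs, flags and times are illustrative only.
  sessions = [
      {"participant": "P1", "completed_without_errors": True,  "minutes": 28},
      {"participant": "P2", "completed_without_errors": False, "minutes": 55},
      {"participant": "P3", "completed_without_errors": False, "minutes": 70},
  ]

  # Completion rate: share of participants who finished the scenario without any errors.
  completion_rate = sum(s["completed_without_errors"] for s in sessions) / len(sessions)

  # Time on task: elapsed time from starting the scenario to signalling completion.
  average_time_on_task = sum(s["minutes"] for s in sessions) / len(sessions)

  print(f"Completion rate: {completion_rate:.1%}")
  print(f"Average time on task: {average_time_on_task:.0f} minutes")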


Usability Testing Task


  • Set-up: 15 minutes of instructions and helping participants log in to the questionnaire part
  • Task: Within 30 minutes, participants work through the questionnaire part, which includes five categories of speech and language questions relating to their child’s development.


Problem and Severity


Impact

Impact ranks the consequences of a problem by the effect it has on successful task completion.

There are three levels of impact:

• High - prevents the user from completing the task (critical error)

• Moderate - causes user difficulty but the task can be completed (critical error)

• Low - minor problems that do not significantly affect task completion (non-critical error)



Frequency


Frequency is the percentage of participants who experience the problem when working on a task.



• High: 30% or more of the participants experience the problem

• Moderate: 11% - 29% of participants experience the problem

• Low: 10% or fewer of the participants experience the problem
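
Combining the impact and frequency levels above gives each problem a severity rating. Below is a minimal sketch in Python of that classification; the function names and the example call are illustrative, not part of the actual study tooling:

  def frequency_level(share_affected):
      # Share of participants (0.0 to 1.0) who experienced the problem.
      if share_affected >= 0.30:
          return "High"
      if share_affected >= 0.11:
          return "Moderate"
      return "Low"

  def impact_level(prevents_completion, causes_difficulty):
      # Effect of the problem on successful task completion.
      if prevents_completion:
          return "High"      # critical error
      if causes_difficulty:
          return "Moderate"  # critical error, but the task can still be completed
      return "Low"           # non-critical error

  # Example: a problem seen by 5 of 6 participants that blocks task completion.
  print(frequency_level(5 / 6), impact_level(True, False))  # -> High High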



Testing Results - Observations


At the beginning of the questionnaire, 83.3% of participants zoomed the interface out while filling in the questionnaire.

Within 15 minutes, 50% of participants started to feel confused about the questionnaire, slowing down their progress.


Within 20 minutes, 66.7% of participants felt tired of filling in so many questions, felt overwhelmed, and asked how many questions they still needed to answer.



Result Analysis from the Metrics



Completion rate (unsuccessful rate: 83.3%)

Only 16.7% of participants finished the questionnaire flow within 30 minutes without any errors.


Time on task

16.7% of participants completed within 30 minutes
16.7% of participants completed within 60 minutes
66.7% of participants completed in over 60 minutes


Errors

Users surfaced the following errors and points of confusion:

  1. Fonts are too small (83.3%) (critical error)
  2. The number of questions is overwhelming (66.7%) (critical error)
  3. Confusion about questionnaire progress and current position (50%) (critical error)
  4. Icons are too small to recognise (16.7%) (non-critical error)
  5. Attempts to change previous answers (16.7%) (non-critical error)
  6. The green button colour makes it hard to notice information (16.7%) (non-critical error)



Recommendations for Improvement

To raise the completion rate, the following changes were recommended.

Fonts are too small


  • Topic font size increased from 10pt to 14pt
  • Content font size increased from 8pt to 10pt

The number of questions is overwhelming

  • Before: Questions were shown at random, based on the AI

  • After: Each page shows a maximum of 6 questions (agreed in discussion with the AI engineer)

Confusion about questionnaire progress and current position

  • Before: No progress bar


  • After: A progress bar showing the five categories, using lines and colours





Pre-revamp version of the Eval System landing page


*** The images shown are not the actual system screens; the originals are withheld because the information is sensitive and may be covered by an NDA.

Revamped version of the Eval System questionnaire page



︎ ︎  ︎
Copyright © 2024   CHAO-LING CHYOU            LONDON, UNITED KINGDOM