UsabilityCluster/Usability Test Framework

From Earth Science Information Partners (ESIP)

Section 1 of 6: Inviting Users to Participate in the Study

  • Purpose of this Section:
    • The information in this section can be used for identifying and recruiting users for the user studies.
  • Information Applicable for this Section:
    • Sample message for soliciting users:


Dear [person’s name],
[First sentence should be something that allows the person to recognize the organization, or a particular person who is familiar to them, to catch their interest and let them know that this isn’t spam.] We are conducting a study of [product] to better understand [how to organize information, how users interact with the system, etc. Why are you conducting the usability study - very broad statement]. [Provide an explanation of the product, if necessary (1-2 sentences that tell the person what the product is).] Your feedback will help us to improve this product.
During the study you will be asked to complete a number of tasks using [the product] and to provide feedback about your experience. The study will take approximately [time frame (see the section further down about appropriate test times)] and will run from [date range]. The study will take place [virtually OR (insert location)]. To thank you for your time, we are offering [insert the incentive (e.g., a $5 gift card); this is optional].
If you are interested in participating, please [indicate how they can reply] and we/I will provide you with more information.


  • Tips:
    • Keep the introductory email short.
    • Personalize the email and provide references that the users can relate to, such as an organization or a familiar contact. Name recognition can help to draw users in.

Section 2 of 6: Test Introduction

  • Purpose of this Section:
    • The information from this section could be used for the following:
      • To explain the process of the user study on the day you meet with the users.
  • Information Applicable for this Section:
    • Consent to participate in the testing:
      • Possible steps:
        • Step 1: Verify whether formal informed consent is required by your organization (e.g., an Institutional Review Board - IRB).
        • Step 2: Document any applicable waiver for consent.
        • Step 3: If consent is required, confirm with the IRB whether a consent script would be sufficient. If so, also confirm whether a waiver for a signed informed consent form is required. An example of a consent script can be viewed here.
        • Step 4: If a consent script is not sufficient and a formal consent form is required, check with your IRB whether there is an official form that should be used. A generic example of a consent form can be viewed here.
        • Additional references/recommendations per the U.S. Department of Health & Human Services have been summarized and provided by Dr. Robert Downs of the Center for International Earth Science Information Network (CIESIN) at Columbia University, and can be viewed here.
    • Additional areas to discuss with users:
      • Encourage users to behave as naturally as possible.
      • Repeat what is being tested and provide more information about the product, if necessary.
      • Stress that there are no right or wrong answers. We are testing the system, not the user. The user can’t do anything wrong.
      • Ask users to go with their first reaction and not to think through things too much. Set some sort of time limit for each task to keep things moving along. Tell them: “You will have [time] to complete each task, and at the end of that time, I will ask you to move on.” “You are welcome to end each task at any time.” “Go with your first reaction and don’t think through things too much.”
      • Users can stop the usability test at any time.
      • If you are recording, let them know and make sure they are okay with it.
      • Tell them how/where the responses will be used (e.g., whether responses will be used in a publication, given to developers, etc.).
      • If the test is on the web, bookmark the homepage so that users can get back to where they started if they get lost. Let them know how to get back to the homepage.
  • Tips:
    • Don’t read the test user a script here; just have a conversation. It is important to rehearse and pilot-test the introduction so that you can be as natural as possible when interacting with your test users. If you read them a script, they will feel more like they are being studied. Keep things as natural and comfortable as possible.

Section 3 of 6: Pre-test survey

  • Purpose of this Section:
    • This section is used to collect background information about the test user and to establish a benchmark for the user’s experience with the system-under-test or with similar systems.
    • The information from this section can help to:
      • Prioritize the usability issues found during tests.
      • Understand the differences in test results.
      • Provide the user’s context.


  • Information Applicable for this Section:
    • Recommended:
      • We recommend collecting the user’s demographic information and their experience with the system-under-test.
      • Sample questions that could be used are shown below (a sketch of how the resulting ratings could be recorded appears at the end of this section):
        • What is your field of study? (fill in the blank) (Possibly include a link to the RDA vocabulary for the suggested values)


Example of presenting the above question with predefined values:
research
program management/project management
data management
metadata creation
database administration
ecological modeling
policy making
field work
application development


Other, please specify here:
        • How often do you [use a system that is presented by the system-under-test]?
          • 1 to 5 (Likert scale), 1 being least and 5 being most.
        • How would you rate your experience level with [the system that is presented by the system-under-test]?
          • 1 to 5 (Likert scale), 1 being least and 5 being most.
        • How often do you [perform the function/task that is being tested]?
          • 1 to 5 (Likert scale), 1 being least and 5 being most.
        • How would you rate your experience level with [the function/task]?
          • 1 to 5 (Likert scale), 1 being least and 5 being most.
    • Optional:
      • Additional questions could be considered to collect the user’s reasons for/interest in the system-under-test and their expectations of it.
      • Sample questions that could be used are shown below:
        • What are your key job duties?
        • Why are you interested in participating in this test?
        • Is there anything you would like to learn from the experience?
        • Have you used other systems that are similar to the system-under-test?


  • Tips:
    • Please note that free-text questions can be time-consuming for users to answer, so make sure that any such questions add value to the survey.
    • Free-text responses are also more difficult and time-consuming to analyze, so consider free-text questions carefully before including them in the pre-test survey.


*Note: We will need to add references here. Do people already have recommendations?
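
As an illustration only (not part of the original framework), the sketch below shows one assumed way to record the recommended 1-5 Likert ratings and average them into a simple per-participant experience benchmark. The question wording, data layout, and averaging approach are illustrative assumptions.

# Minimal sketch (assumptions only, not part of the original framework): record
# the recommended 1-5 Likert ratings for one participant and average them into
# a simple experience benchmark.

from statistics import mean

LIKERT_QUESTIONS = [
    "How often do you use a system like the system-under-test?",
    "How would you rate your experience level with such a system?",
    "How often do you perform the function/task that is being tested?",
    "How would you rate your experience level with the function/task?",
]

def experience_benchmark(ratings):
    """ratings: one 1-5 rating per question, in the order of LIKERT_QUESTIONS."""
    if len(ratings) != len(LIKERT_QUESTIONS) or any(r < 1 or r > 5 for r in ratings):
        raise ValueError("Expected one rating between 1 and 5 per question.")
    return mean(ratings)

# Hypothetical participant: frequent user of similar systems, newer to this task.
print(experience_benchmark([4, 3, 2, 2]))  # -> 2.75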

Section 4 of 6: Usability test itself


Section 5 of 6: Post-test reflection

  • Choosing when to ask questions after each task versus at the end of the test
    • How much time do you have for the test? Asking questions after each task requires more time.
    • How much time do you have to evaluate the results? This will affect how many questions you want to ask.
    • Are you more interested in the individual tasks or the test as a whole?
  • Guidelines/recommendations regarding how to “interview” the users.
    • Quantitative Methods:
      • System Usability Scale (SUS) questions could be applied here. To get quantifiable results, you need to ask all 10 questions; however, they can also be used for more generic feedback if you can’t get through all of them. Each question uses a scale of 1 (strongly disagree) to 5 (strongly agree). A sketch of how the responses are scored appears at the end of this section.
        • 1. I think that I would like to use this system frequently.
        • 2. I found the system unnecessarily complex.
        • 3. I thought the system was easy to use.
        • 4. I think that I would need the support of a technical person to be able to use this system.
        • 5. I found the various functions in this system were well integrated.
        • 6. I thought there was too much inconsistency in this system.
        • 7. I would imagine that most people would learn to use this system very quickly.
        • 8. I found the system very cumbersome to use.
        • 9. I felt very confident using the system.
        • 10. I needed to learn a lot of things before I could get going with this system.
      • If there is time, a single-ease question can also be used after each task:
        • Overall, how difficult or easy did you find this task?
          • It uses a scale of 1 to 7 (1 - very difficult; 7 - very easy)
    • Qualitative:
      • Interview Questions (it may be better to ask these questions in person than with a written survey):
        • How was your overall experience with the system?
        • Is there anything that you would like to comment on other than what you have already thought aloud?
        • Anything else that you would like to share about your interaction with the system?
        • The questions from the System Usability Scale could also be used as prompts for getting qualitative feedback.
  • Collect information for inviting users to participate in further activities/follow-ups and to receive updates.
  • Additional information that might be useful to document (if applicable):
    • Information to improve the way the test is administered:
      • Did users think the test was a good use of their time? Why or why not?
      • Was 15 minutes too long? Too short? An efficient use of time?
      • How could the test administrator improve the test experience?
      • Additional audiences that the user would recommend testing with.
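
As an illustration only (not part of the original framework), the sketch below scores SUS responses using the standard SUS formula: odd-numbered items contribute (response - 1), even-numbered items contribute (5 - response), and the summed contributions are multiplied by 2.5 to give a score from 0 to 100. The function name and input format are assumptions for this example.

# Minimal sketch (an assumed helper, not part of the original framework) for
# scoring System Usability Scale responses with the standard SUS formula:
# odd-numbered items contribute (response - 1), even-numbered items contribute
# (5 - response), and the summed contributions are multiplied by 2.5.

def sus_score(responses):
    """responses: ten integers (1-5), ordered as SUS items 1-10."""
    if len(responses) != 10 or any(r < 1 or r > 5 for r in responses):
        raise ValueError("Expected ten responses, each between 1 and 5.")
    total = 0
    for item_number, response in enumerate(responses, start=1):
        if item_number % 2 == 1:   # odd items (positively worded)
            total += response - 1
        else:                      # even items (negatively worded)
            total += 5 - response
    return total * 2.5

# Hypothetical participant's responses to items 1-10.
print(sus_score([4, 2, 5, 1, 4, 2, 5, 1, 4, 2]))  # -> 85.0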


Section 6 of 6: Using the Test Results

  • Prioritization of the issues found -
    • Generate summary report
    • Present findings to stakeholders
    • Review goals/mission statements of the project
    • Review available resources (e.g., time, financial support, staff members, alignment with institutional goals, etc.)
    • Select top issues
  • Education -
    • Training the team
      • Be able to recognize potential issues
      • Be able to provide possible solutions
    • Build user-centric culture
      • Promote understanding of how to be more user aware
    • Create/update design practices or guides
  • Verification/Validation of the Usability Improvement -
    • Reassess usability
    • Measure performance/effectiveness by keeping metrics (number of clicks, amount of time people stay on a page, amount of time people take to find what they are looking for, etc.); a sketch of summarizing such metrics appears at the end of this section.
      • Examples of techniques that can be used: System Usability Scale, A/B test
  • Roadmap Development -
    • Make an action plan / improvement plan
    • Continue to:
      • Involve stakeholders in discussions
      • Educate team members
      • Adapt to changing design standards
      • Integrate usability principles in design cycles
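
As an illustration only (not part of the original framework), the sketch below assumes each task attempt is recorded as a (completed, seconds-on-task) pair and computes a completion rate and mean time on task for two design versions, the kind of summary that supports an A/B comparison or a before/after reassessment. The data layout and values are hypothetical.

# Minimal sketch (assumed data layout, not part of the original framework) for
# tracking simple usability metrics: each task attempt is recorded as a
# (completed, seconds_on_task) pair, and two design versions are summarized so
# they can be compared (e.g., an A/B test or a before/after reassessment).

from statistics import mean

# Hypothetical observations for the current design (A) and a redesign (B).
attempts_a = [(True, 95), (False, 180), (True, 120), (True, 70)]
attempts_b = [(True, 60), (True, 85), (True, 110), (False, 150)]

def summarize(attempts):
    completed = [seconds for done, seconds in attempts if done]
    return {
        "completion_rate": len(completed) / len(attempts),
        "mean_time_on_task_s": mean(seconds for _, seconds in attempts),
    }

print("Version A:", summarize(attempts_a))
print("Version B:", summarize(attempts_b))

# Mean SUS scores per test round (e.g., from the sus_score() sketch in
# Section 5) can be compared the same way.
sus_a = [72.5, 80.0, 65.0]
sus_b = [85.0, 90.0, 77.5]
print("Mean SUS A:", mean(sus_a), "Mean SUS B:", mean(sus_b))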