UsabilityCluster/MonthlyMeeting/2017-03-01 MeetingNotes
Attendees: Tamar Norkin, Shannon Rauch, Reid Boehm, Madison Langseth, Sophie Hou
1. Continuing to define personas and data archive/repository service areas that could benefit from usability evaluations
- Notes:
- Reid might have a use case (RMap) that could be used for this activity, and if the timing is appropriate, we could aim to showcase Reid's use case at the Summer Meeting.
- Types of roles/personas:
- Producer (P)
- An entity that submits data to an archive/repository.
- User (U)
- An entity that applies data from the archive/repository to other purposes.
- Assessor (A)
- An entity that reviews data from the archive/repository to determine the performance of the archive/repository.
- Data archive/repository operator (O)
- An entity that works with data in the archive/repository to manage/sustain the archive/repository.
- Areas:
- Home page - PUAO
- Search - UAO
- Searching for data using a geospatial/map interface - UAO
- Browse - UAO
- Data ingest forms, including metadata input (initial information) - PO
- Metadata development (tools for editing, updating, managing, and curating metadata) - PO
- Adding new components to a pre-existing user interface - O(PAU)
- Help documents - PU
- Registration to a site or service - PU
- Downloading data (accessing data) - U(A)
- Access to an identified resource (e.g., dataset, software, etc.) - U(A)
- Note:
- This is different from the "Downloading data" item above; that item is intended for the actual process/steps for obtaining the resource, while this item is intended for locating the access point.
- Would users prefer to get the resource directly from the page or be directed to another website/location? What happens when the files are too large to include right on the same page?
- Dataset landing page; Collection landing page - U
- Contact request forms and such (i.e., asking for help on something) - PU
- Policies/Terms and Conditions/Rights/Licensing Information - PU
- Additional Notes:
- Possibilities for next steps:
- Write out definitions for the areas, possibly in the form of a matrix (see the sketch following these notes).
- Solicit use cases.
- Present it to other ESIP groups (Summer)
- A poster or a session?
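As a minimal sketch (not produced at the meeting), the area/persona matrix mentioned above could be recorded as a simple data structure. The area names and persona codes follow the list above; the dictionary layout and the helper function `areas_for` are illustrative assumptions:

```python
# Hypothetical sketch of the area/persona matrix from the notes above.
# Persona codes: P = Producer, U = User, A = Assessor, O = Operator.
# Codes in parentheses in the notes are treated here as secondary personas.

AREA_PERSONAS = {
    # area name:                        (primary, secondary)
    "Home page":                        ("PUAO", ""),
    "Search":                           ("UAO", ""),
    "Geospatial/map search":            ("UAO", ""),
    "Browse":                           ("UAO", ""),
    "Data ingest forms":                ("PO", ""),
    "Metadata development":             ("PO", ""),
    "Adding new UI components":         ("O", "PAU"),
    "Help documents":                   ("PU", ""),
    "Registration":                     ("PU", ""),
    "Downloading data":                 ("U", "A"),
    "Access to an identified resource": ("U", "A"),
    "Dataset/collection landing pages": ("U", ""),
    "Contact request forms":            ("PU", ""),
    "Policies/terms/licensing":         ("PU", ""),
}

def areas_for(persona: str) -> list[str]:
    """List all areas where a persona code appears, as primary or secondary."""
    return [
        area for area, (primary, secondary) in AREA_PERSONAS.items()
        if persona in primary or persona in secondary
    ]

# Example: the areas an Assessor (A) would help evaluate.
print(areas_for("A"))
```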
2. Discussion of developing a usability test framework
- Notes:
- Introduction
- Purpose of the test
- Terms and conditions
- Pre-test survey (Use example questions from the DataONE survey and other members' surveys.)
- Demographic information
- Domain/Area of Study
- Motivation
- Reasons/Interests in the system-under-test
- Experience with the system-under-test
- Expectations for the system-under-test
- Usability test itself
- Length of the test
- Tasks (appropriate to the length of the test)
- Up to 30 minutes: 3 tasks
- 30 minutes - 1 hour: 5 tasks
- More than an hour and up to 90 minutes (more than 90 minutes is not recommended): 7 tasks
- Definitions of tasks:
- Guidelines/recommendations regarding how to develop the tasks.
- Guidelines/recommendations regarding how the test administrators could interact with the user.
- E.g., staying unbiased, asking questions that help users think out loud, and trying not to distract users.
- The specific tasks would be determined based on the system-under-test and the areas needing evaluation.
- Post-test reflection
- Guidelines/recommendations regarding how to “interview” the users.
- Information to invite users to participate in further activities/follow ups and to receive updates.
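As an illustrative sketch only (not something drafted at the meeting), the framework outline above could be kept as a structured checklist, with the length-to-task-count guideline expressed as a small helper; the names `FRAMEWORK` and `recommended_task_count` are hypothetical:

```python
# Hypothetical encoding of the draft usability test framework outlined above.

FRAMEWORK = {
    "Introduction": ["Purpose of the test", "Terms and conditions"],
    "Pre-test survey": [
        "Demographic information (domain/area of study)",
        "Motivation (reasons/interests in the system-under-test)",
        "Experience with the system-under-test",
        "Expectations for the system-under-test",
    ],
    "Usability test": [
        "Length of the test",
        "Tasks appropriate to the length of the test",
        "Guidelines for developing tasks",
        "Guidelines for administrator/user interaction",
    ],
    "Post-test reflection": [
        "How to 'interview' the users",
        "Invitations to follow-up activities and updates",
    ],
}

def recommended_task_count(minutes: int) -> int:
    """Task-count guideline from the notes: 3 tasks up to 30 minutes,
    5 tasks up to an hour, 7 tasks up to the recommended 90-minute cap."""
    if minutes <= 30:
        return 3
    if minutes <= 60:
        return 5
    if minutes <= 90:
        return 7
    raise ValueError("Tests longer than 90 minutes are not recommended.")

# Example: a 45-minute session suggests about 5 tasks.
print(recommended_task_count(45))
```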