UsabilityCluster/MonthlyMeeting/2016-10-05 MeetingNotes


Attendees: Dave Bushnell, Tamar Norkin, Bob Downs, Madison Langseth, Ward Fleri, Bruce Caron, Sophie Hou, Rachel Volentine


Action: Madison and Sophie will draft a session proposal and send it out to the Cluster for feedback prior to the Oct. 31 submission deadline.


(for the presentations used during this meeting, please see the "Presentation" section of the wiki)


1. Overview of Usability (Sophie)

  • 5 components of Usability: learnability, efficiency, memorability, errors, and satisfaction.
  • Utility + Usability = Usefulness


2. Process to Establish Usability Benchmark (Ward)

  • Presentation Title: Benchmarking Experiences from the Immune Epitope Database
  • Worked with usability engineers from Virginia Tech in order to improve the Immune Epitope Database and Analysis Resource.
  • Feedback and help tickets from users helped show that there were several usability-related issues that the users were encountering.
  • Surveys were run for over 3 months to solicit feedback formally and to define the different user groups and their respective needs.
  • A usability metric was set up to identify usability issues, and the results established a baseline for future comparison.
  • Question: How was the audience for the survey determined?
  • The user group consisted mainly of people involved in the life sciences (a fairly homogeneous group).
  • User Observation sessions were also conducted.
  • Asked 10 users to perform 10 sample queries.
  • Compensation (gift cards) was provided to the participants.
  • Question: What are the pros and cons of using the System Usability Scale?
  • Ward would recommend the System Usability Scale because it is simple to use, but the downside is that it is difficult to get enough responses to ensure that the results are statistically sound.
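
As background on the scale Ward mentioned: a SUS score is computed from ten 1-5 Likert responses by rescaling each item and summing. Odd-numbered (positively worded) items contribute (response - 1), even-numbered (negatively worded) items contribute (5 - response), and the total is multiplied by 2.5 to give a 0-100 score. A minimal scoring sketch (the sus_score helper and the sample responses below are illustrative only, not from the presentation):

<pre>
# Minimal sketch of System Usability Scale (SUS) scoring (Brooke, 1996).
# The example responses are made up for illustration only.

def sus_score(responses):
    """Compute a SUS score (0-100) from ten 1-5 Likert responses."""
    if len(responses) != 10 or not all(1 <= r <= 5 for r in responses):
        raise ValueError("SUS needs ten responses, each between 1 and 5")
    total = 0
    for i, r in enumerate(responses, start=1):
        # Odd items (positively worded) contribute r - 1;
        # even items (negatively worded) contribute 5 - r.
        total += (r - 1) if i % 2 == 1 else (5 - r)
    return total * 2.5

# One hypothetical respondent's answers to the ten SUS statements:
print(sus_score([4, 2, 5, 1, 4, 2, 5, 1, 4, 2]))  # -> 85.0
</pre>
Individual scores are usually averaged across respondents, which is where the sample-size concern Ward raised comes in.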


3. Usability Testing - In Person vs. Virtual / Comparison with Focus Group (Sophie)

  • Four Stages: Prep, Intro, Testing, Debrief.
  • Discount usability might be a faster, easier approach to usability testing.
  • Use at least 5 users for usability testing. There is an upper bound, but it is determined by the resources available, etc. (see the sketch after this list).
  • Preparation: areas to consider include establishing goals, determining the resources available, checking for ethical or legal requirements, selecting users who reflect the personas, setting up the test environment, and rehearsing/pilot testing if possible.
  • Introduction: welcome the participant (extremely important for making users feel comfortable), explain the purpose of the test and the testing procedures, and let participants know that they can terminate the test at any time.
  • Test: Very important not to bias users; encourage users to think aloud.
  • Debrief: users can ask questions, provide feedback, and complete a final survey or questionnaire. Be sure to summarize the results soon afterward and provide them to the team as quickly as possible.
  • Focus Groups / Interviews:
  • People often say what they think, not necessarily what they actually do.
  • Important for determining users’ opinions and level of satisfaction.
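
The "at least 5 users" guideline mentioned above is commonly justified with the Nielsen/Landauer problem-discovery model, in which the fraction of problems found by n users is 1 - (1 - L)^n for an average per-user discovery rate L. A minimal sketch, assuming the commonly cited rate of about 0.31 (an assumption, not a figure from this meeting):

<pre>
# Problem-discovery model behind the "at least 5 users" rule of thumb
# (Nielsen & Landauer): found(n) = 1 - (1 - L)**n, where L is the average
# fraction of problems a single user uncovers (0.31 is the commonly cited
# value and is assumed here, not taken from the meeting).

def problems_found(n_users, per_user_rate=0.31):
    """Expected fraction of usability problems uncovered by n_users."""
    return 1 - (1 - per_user_rate) ** n_users

for n in range(1, 11):
    print(f"{n} users -> {problems_found(n):.0%} of problems")

# With the assumed rate, 5 users already uncover roughly 84% of the problems,
# which is why adding more users yields diminishing returns.
</pre>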


4. Case Study from USGS (Madison)

  • Presentation Title: Sciencebase Data Release Landing Pages Usability Study.
  • Several goals were defined to ensure that improvements could be made to the Sciencebase Data Release landing page.
  • Out of these goals, 3 specific tasks were selected for the usability test.
  • The test environment was set up in a lab dedicated to usability testing.
  • Test logistics:
  • Pre-test questionnaire to collect demographic information.
  • Each test session lasted about 15 minutes to respect participants' time and to minimize test fatigue.
  • A post-test assessment was also conducted.
  • The test results provided many insights into users' behavior, including confirmation that users might do things differently from what they think or say they will do.
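
As a generic illustration of how a post-test assessment for a small study like this can be summarized (not the USGS team's actual analysis; the task names, timings, and outcomes below are hypothetical), per-task completion rates and median times might be tabulated as follows:

<pre>
# Hypothetical summary of a small usability test: per-task completion rate
# and median time-on-task. The observations are invented for illustration.
from statistics import median

# (participant, task, completed, seconds)
observations = [
    ("P1", "find the DOI", True, 45), ("P1", "download the data", True, 120),
    ("P2", "find the DOI", False, 90), ("P2", "download the data", True, 150),
    ("P3", "find the DOI", True, 60), ("P3", "download the data", False, 180),
]

for task in sorted({t for _, t, _, _ in observations}):
    rows = [(done, secs) for _, t, done, secs in observations if t == task]
    rate = sum(done for done, _ in rows) / len(rows)
    print(f"{task}: {rate:.0%} completed, median {median(s for _, s in rows)}s")
</pre>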


5. Open Discussion