UsabilityCluster/MonthlyMeeting/2018-03-08 MeetingNotes

From Federation of Earth Science Information Partners

Attendees: Madison Langseth, Sophie Hou, Connor Scully-Allison, Megan Carter, Kerstin Lehnert, Ge Peng, Rachel Volentine, Nicolau Manubens

Agenda:

1. Announcement for upcoming activities:

  • Session abstract(s) for ESIP Summer Meeting
  • Call open until April 30th
  • Some Ideas
  • Possible Combined Use Cases
  • Connor’s QC Application - Use Case of the Usability Framework in the process of application development
  • Provide content for parts of the test framework
  • Using Connor’s app as a use case
  • End goal is involving users in testing (win-win)
  • IEDA
  • Another Use Case that can be in this session
  • We may break these up into two sessions
  • Connor: Either one session with both use cases or two sessions, each featuring its own study, is fine
  • Kerstin: My idea for the session is a high-level search, but I would like to include some more advanced searches in the domain-specific areas. This is a more involved use case and would require a dedicated workshop
  • We are leaning towards two sessions
  • A combined session might be enough if it's just an introduction, but with developed products two sessions are the stronger option
  • Card sorting and USGS' application of this technique (Data Management Card Sort)
  • Madison has been working with Rachel
  • Developing card sort with USGS website
  • Optimal sort workshop
  • Sharing the test activity with us
  • Seeking volunteers for this card sort
  • Takes about 15 - 20 minutes
  • Drag and drop cards to organize related ideas together
  • It’s fun!
  • Link to Exercise
  • Sophie
  • Used to understand how users expect information to be mapped
  • Informs us on how to make “intuitive” UIs
  • Try it!
  • You get a lot of insight into how you think and how your users might think
  • Review of any initial content for Connor's usability test script
  • Further development on UI itself
  • More in April
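The card-sorting exercise described above is typically analyzed by counting how often participants place the same pair of cards in one group (a co-occurrence matrix, the kind of summary tools like OptimalSort produce). A minimal sketch; the card names and participant groupings below are invented for illustration, not from the USGS study:

```python
from collections import Counter
from itertools import combinations

def co_occurrence(sorts):
    """Count how often each pair of cards is placed in the same group.

    sorts: one dict per participant, mapping group label -> list of cards.
    Returns a Counter keyed by frozenset({card_a, card_b}).
    """
    pairs = Counter()
    for sort in sorts:
        for cards in sort.values():
            for a, b in combinations(cards, 2):
                pairs[frozenset((a, b))] += 1
    return pairs

# Hypothetical sorts from three participants (invented example data)
sorts = [
    {"Describe": ["Metadata", "Citations"], "Preserve": ["File formats", "Backups"]},
    {"Document": ["Metadata", "Citations", "File formats"], "Store": ["Backups"]},
    {"Describe": ["Citations", "Metadata"], "Preserve": ["Backups", "File formats"]},
]
pairs = co_occurrence(sorts)
print(pairs[frozenset(("Metadata", "Citations"))])  # 3: grouped together by all three
print(pairs[frozenset(("Metadata", "Backups"))])    # 0: never grouped together
```

High co-occurrence counts suggest cards that users see as belonging together, which is exactly the "how users expect information to be mapped" insight noted above.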

2. Work on the Cluster Update slides (to be sent back to Bruce)

  • Posters for summer meetings are being replaced
  • Instead of posters, as in past years, slides have been requested
  • Cover the cluster's key goals: what we are doing and what we will be doing
  • They are supposed to attract people to our work
  • Help us detail the support we might be able to gain
  • Some Parts
  • Who we are, what we want to do, and why
  • Leadership & goals slides
  • Key Questions we are addressing
  • Can take content from the "About Us" page
  • Key Activities
  • We can possibly show full slides at April meeting
  • Next steps: get feedback from the rest of the group before we submit in April
  • Connor: Opportunities for collaboration?

3. Formulate a usability test script for the Interdisciplinary Earth Data Alliance (IEDA) Integrated Catalog

  • Kerstin Intro
  • Focus in the data world: physical data samples
  • Megan
  • Data Manager at IEDA - Passionate about physical samples
  • One of the core people in testing systems when developing new applications
  • IEDA
  • Partnership between different data systems
  • Diverse set of systems operating under IEDA
  • Challenges in creating applications
  • Hard to create a consistent interface
  • Working on creating a more consistent UX
  • NSF Demand
  • Create applications that provide access to all our data holdings
  • IEDA Integrated Catalog (IC)
  • Funded by EarthCube
  • Can be scaled up to include other systems
  • IC
  • Released in December
  • We need feedback on its bugs and capabilities
  • We need to understand what the community is looking for
  • Catalog across many systems
  • Broader user audience
  • People from many domains
  • Free text search
  • Built on many vocabularies
  • Makes these databases available in more detail than overarching search engines
  • Sophie Shows IEDA Website
  • Kerstin: We are grateful to the Usability Cluster for featuring this interface; we can learn a lot about the best way to make test cases and do usability studies
  • Megan : I am interested in your methods as well
  • Sophie :
  • Usability Framework overview
  • Usability Tests can be characterized by several components
  • “Usability is an assessment of quality”
  • 5 different areas of Usability
  • Learnability
  • How easy is it for someone to learn your system
  • Memorability
  • How easy is it to recall
  • Does it conform to their mental model
  • Error Mitigation
  • Is it easy for users to make mistakes?
  • Stages of the development lifecycle at which a usability study can be applied
  • There will be different techniques for different stages of development
  • Usability Framework
  • Focused on qualitative side of Testing
  • Categories
  • Inviting Users
  • One needs to know the target audience
  • It’s important that the system has a certain level of functionality
  • Madison or Rachel input?
  • Work in progress
  • Give us feedback
  • Kerstin
  • Excited to hear about this
  • Sections as they apply to IEDA
  • Linear Process
  • We test when we expect helpful feedback from users
  • We should know how quickly they can perform the tasks
  • Understanding how delighted users are when they use the system
  • In Order
  • Define Users
  • Understand the specific characteristics and requirements of your audience
  • Make sure your test audience reflects these users
  • We are at the mercy of the ESIP membership
  • Testing with general users can be good
  • We can get good feedback on big issues from general users
  • Kerstin: Bias is towards community that manages and develops data. Users will likely be data users.
  • Sophie: We can use the feedback to iterate over the test and better understand how to gear it towards our actual test group. Importance of pilot testing. Helps classify general problems and specialty problems.
  • ESIP will be Kerstin’s beta test
  • Thinking about how to set up the session
  • Specifically upon seeing the user
  • Orientation
  • Some people are familiar with User Studies
  • We are striving to put users at ease when they first arrive
  • Sending verbiage to users
  • What is average size of tests?
  • Statistical significance?
  • There is a rule of thumb
  • Quantitative tests are different; we can evaluate results statistically
  • Qualitative - 5 to 7 to 10 and beyond
  • We can capture many key issues with 5 to 10 people
  • How well do test users represent what we want to study?
  • There are many different metrics for classifying our users
  • If we know our key groups then 5 of each group would be best
  • But we can get general feedback from 5 general users
  • This can be fine for being in Beta
  • Recap
  • Recommend that the intro fit into an email
  • A possible piece to include
  • Asking pre-test questions to understand who your users are
  • There are some example questions
  • Usability Test itself
  • Length of test, how to define test, how to execute test
  • Some things are already answered for us
  • By virtue of doing this at ESIP
  • Pen and paper gives us good feedback on the test itself
  • Format of the tests can be different
  • Length of test
  • Aim for 15 mins
  • Cannot include many tasks
  • 3-5 mins per task
  • Aim for simple rather than complex tasks
  • Recruit more people overall to accomplish more tasks, and split who does what
  • How many tasks does IEDA want to moderate?
  • Kerstin: We will talk about this offline
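The two rules of thumb discussed above — 5 to 10 qualitative testers, and a 15-minute session at 3-5 minutes per task — can be sketched numerically. The discovery curve below is Nielsen and Landauer's 1 - (1 - p)^n model with their reported p ≈ 0.31; the two-minute session overhead is an assumption added for illustration, not a figure from the meeting:

```python
# Nielsen & Landauer's problem-discovery model: with n testers, the share
# of usability problems found is 1 - (1 - p)**n, where p is the chance a
# single tester encounters a given problem (~0.31 in their data).
def problems_found(n, p=0.31):
    return 1 - (1 - p) ** n

for n in (1, 5, 10):
    print(f"{n} testers -> ~{problems_found(n):.0%} of problems found")

# Task budget for a 15-minute session at 3-5 minutes per task,
# reserving 2 minutes for greeting and wrap-up (assumed overhead).
session_min, overhead_min = 15, 2
budget = session_min - overhead_min
print(f"{budget // 5} to {budget // 3} tasks fit in one session")
```

With five testers the model already predicts roughly 84% of problems found, which is why the "5 to 10 people" guidance captures most key issues.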

4.Discuss the Framework's "Post-Test Reflection" section (Undiscussed)