ch21notes


=Further Evaluation: Generic Techniques and Current Issues=

21.1 Introduction: Establishing the context for evaluation
- establishing evaluation objectives means considering each element of the IMPACT framework:
  - Intention: clarify the aims of the evaluation project
  - Metrics and measures: What is to be measured, how and why? Make sure each planned measure actually helps to answer evaluation questions
  - People: What is the target population for the technology being evaluated? How will they be represented in evaluation work?
  - Activities: What activities will be supported by the technology? Use scenarios to set the scene for users and to draw up the list of actions which users will undertake
  - Contexts: What aspects of the wider social and physical context may affect the way the technology is used?
  - Technologies: What hardware and software will be used to deliver the product? How far can or should these be used in the evaluation? What tools are needed to support the evaluation process itself?
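The IMPACT elements can double as a planning checklist. A minimal sketch of that idea follows; the plan entries and the `missing_elements` helper are invented for illustration, not part of the framework itself:

```python
# Hypothetical evaluation plan organised by the IMPACT elements
impact_plan = {
    "Intention": "Assess whether novices can complete a booking unaided",
    "Metrics and measures": ["task completion rate", "time on task"],
    "People": "First-time users of the booking site, recruited via email",
    "Activities": "Scenario: book a return ticket for two adults",
    "Contexts": "Home use on a laptop, possibly with distractions",
    "Technologies": "Staging web server; screen-recording software",
}

def missing_elements(plan):
    """List any IMPACT elements the plan has not yet addressed."""
    required = ["Intention", "Metrics and measures", "People",
                "Activities", "Contexts", "Technologies"]
    return [e for e in required if not plan.get(e)]

print(missing_elements(impact_plan))  # empty list when the plan is complete
```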

21.2 Further techniques for evaluation with users
Questionnaires and checklists
- questionnaires are not an easy option for collecting evaluation data
- good questionnaires must have questions that...
  - are understandable
  - are unambiguous
  - collect data which actually answers evaluation questions
  - can be analysed easily
- response rates are usually low (often under 10% because of lack of interest in the topic being evaluated)
- use closed-ended questions with specific options
- checklists are better: YES/NO responses to specific statements, e.g. 'I feel confident using Microsoft Word 2006. Yes/No'
- however, checklists do not provide details
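Part of why checklist data "can be analysed easily" is that Yes/No answers reduce to simple counts. A sketch of that analysis is below; the statements and responses are hypothetical, not from the chapter:

```python
from collections import Counter

# Hypothetical checklist responses: one dict per respondent,
# mapping each statement to a Yes/No answer.
responses = [
    {"confident_using_word": "Yes", "found_menus_clear": "No"},
    {"confident_using_word": "Yes", "found_menus_clear": "Yes"},
    {"confident_using_word": "No",  "found_menus_clear": "No"},
]

def tally(responses):
    """Count Yes/No answers for each checklist statement."""
    counts = {}
    for respondent in responses:
        for statement, answer in respondent.items():
            counts.setdefault(statement, Counter())[answer] += 1
    return counts

for statement, c in tally(responses).items():
    print(statement, dict(c))
```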

Participatory heuristic evaluation
- time-consuming but helpful for introducing the user perspective into early evaluation
- extends the power of the technique without adding to the effort required
- an expanded list of heuristics is provided, based on those of Nielsen and Mack (1994)
- the list provides real-world context through the introduction of users' jobs and tasks

Co-discovery
- a naturalistic, informal technique which is particularly good for capturing first impressions
- best used in the later stages of design
- users explore new technology in pairs
- elicits a more natural flow of comments, and each person encourages the other to try new interactions they may not have thought of in isolation
- the term originates from Kemp and Van Gelderen (1996)

Evaluation without being there
- relatively finished software is required for this
- users can participate remotely, especially if the application is web-based
- approach developed by Hartson and colleagues at Virginia Polytechnic Institute and State University (1998)
- centres around 'critical incidents' reported as they happen or very shortly afterwards
- a 'critical incident' is anything related to the application that disturbs the task in hand

Physical and physiological measures
- direct measures used as indications of the user's reactions
- physical example: eye-movement tracking shows the user's changing focus on different areas of the screen, revealing which micro-features of a user interface have attracted attention
- physiological examples: increased heart rate, rate of respiration, skin temperature, pulse, etc.
- not completely accurate, as these measures cannot distinguish which emotion is being felt
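Eye-tracking data is often reduced to counts of fixations per area of interest (AOI) to see which screen regions attracted attention. A minimal sketch follows; the regions and fixation points are invented for illustration:

```python
# Hypothetical screen regions (AOIs) as (name, x, y, width, height)
AOIS = [
    ("menu_bar",   0,   0, 800,  50),
    ("search_box", 600, 60, 180,  30),
    ("content",    0,  100, 800, 500),
]

# Hypothetical fixation points (x, y) from an eye tracker
fixations = [(400, 25), (610, 70), (300, 300), (700, 75), (100, 400)]

def fixations_per_aoi(fixations, aois):
    """Count how many fixations land in each area of interest."""
    counts = {name: 0 for name, *_ in aois}
    for fx, fy in fixations:
        for name, x, y, w, h in aois:
            if x <= fx < x + w and y <= fy < y + h:
                counts[name] += 1
                break  # assign each fixation to the first matching AOI
    return counts

print(fixations_per_aoi(fixations, AOIS))
```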

21.3 Predictive evaluation without users
Claims analysis
- should be initiated in the early stages of design and can be used throughout the process
- claims document the envisaged positive effects of a feature but also its potential undesirable consequences
- canonical form from Carroll (1992): IN <situation>, <feature> CAUSES <'desirable' psychological consequences> BUT MAY ALSO CAUSE <'undesirable' psychological consequences>
- writing claims forces the designer to consider the likely advantages and disadvantages of a feature, making claims analysis an early evaluation technique in itself
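Carroll's canonical form is essentially a fill-in template, which can be sketched as a simple string-rendering function. The example claim about an autosave feature is invented, not from the chapter:

```python
def render_claim(situation, feature, desirable, undesirable):
    """Render a claim in Carroll's (1992) canonical form."""
    return (f"IN {situation}, {feature} CAUSES {desirable} "
            f"BUT MAY ALSO CAUSE {undesirable}")

# Hypothetical claim about an autosave feature
print(render_claim(
    "a long form-filling task",
    "automatic saving of partial input",
    "reduced fear of losing work",
    "confusion about which version is current",
))
```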

The cognitive walkthrough technique
- a rigorous paper-based technique for checking through the detailed design and logic of steps in user interaction
- used once a full description of the user interaction has been completed but before the software has been built
- each step in concrete scenarios is checked for usability flaws
- in essence, a usability analyst steps through the cognitive tasks a user must carry out in interacting with the technology
- based on well-established theory rather than trial and error or heuristics
- inputs to the process:
  - an understanding of the users, their tasks, skills and abilities
  - a set of concrete scenarios representing both very common and uncommon sequences of tasks
  - a complete description of the user interface
- once these are gathered, the following 4 questions are asked of each step:
  - will the user try to achieve the right effect?
  - will the user notice that the correct action is available?
  - will the user associate the correct action with the effect they are trying to achieve?
  - if the correct action is performed, will the user see that progress is being made towards the solution of the task?
- if the answer to any question is NO, a usability problem has been identified (though redesign may be deferred until later)
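The four questions above are applied to each scenario step, with every NO recorded as a usability problem. A minimal sketch of that bookkeeping follows; the scenario steps and the analyst's answers are invented for illustration:

```python
QUESTIONS = [
    "Will the user try to achieve the right effect?",
    "Will the user notice that the correct action is available?",
    "Will the user associate the correct action with the effect?",
    "If the correct action is performed, will the user see progress?",
]

# Hypothetical walkthrough: for each scenario step, the analyst's
# yes/no answer to each of the four questions, in order.
walkthrough = {
    "Open the print dialog": [True, True, True, True],
    "Select double-sided printing": [True, False, True, True],
}

def find_problems(walkthrough):
    """Return (step, question) pairs where the analyst answered NO."""
    problems = []
    for step, answers in walkthrough.items():
        for question, ok in zip(QUESTIONS, answers):
            if not ok:
                problems.append((step, question))
    return problems

for step, question in find_problems(walkthrough):
    print(f"Problem at '{step}': {question}")
```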