Lectures (.doc format)

A lot of people (including me) prefer studying from Word documents instead of PowerPoint slides. They are easier to print as well. I copied all the lectures and pasted them here; just copy the following into a Word document: =**Class 1: Introduction**= • In pairs: • What did you do this summer? • What do you expect to gain out of this course? • Name one thing you think is well (or poorly) designed. What makes you reach this conclusion? • Michael Jones • David Gelb • Access and availability • Text • Assignments • Important Notes • Readings • What does this even mean? • Imagining? • Audience? • Wired World? • People • Activities • Context • Technologies • Holistic, interdependent relations among these factors • People are well versed in particular skill sets - whether they’re trained to be or not • Machines do some things extraordinarily well - and require a lot of effort to mimic human skills even partially • Design should speak to strengths of both - esp. when electronic technologies are embedded in physical spaces
 * A bit about us…**
 * Course Syllabus**
 * Imagining the Audience in a Wired World**
 * PACT**
 * Technology in Human Context**

=**Class 2: Goals of User-Centered Design**= • What is good design? What is bad design? • Who benefits from good design? • What is the focus of design? • Who does design? • What is interaction design? • Identifying needs and requirements • conceptualizing alternatives that meet needs • Building prototypes that embody concepts • Evaluating success (iterations) • Effectiveness • Efficiency • Safety • Utility • Learnability • Memorability • Does it do what it’s supposed to do? • Do users’ conceptualizations of what it’s supposed to do match the designer’s? • Examples? • Does it do what it’s supposed to do rapidly, with minimal interruption or effort? • What are users’ expectations of efficiency? • Examples? • Does the design protect the user from harm or error? • Types and severity of human error vary greatly - examples? • Does it have the right mix of functionality? • How is “right” determined? • Function overload vs. simplicity • Examples? • How easy is it to learn how to use the product? • What previous knowledge is expected from user? • Ten-minute rule (and its limitations) • Examples? • Once a user learns how to use the product, can they remember what they’ve learned? • Frequency and regularity of use main factor - others? Examples? • More vaguely defined, but no less important • Aesthetic bias - things that appear well designed are (at least initially) perceived to be functional, easy to use • Helpful • Motivating • Aesthetically pleasing • Support creativity • Rewarding • Enjoyable • Entertaining • Fun • Satisfying • It is difficult (impossible?) to design something that meets all the above • Design often requires tradeoffs and judgment calls - ideally informed by users but not always • Design of Everyday Things • Visibility • Feedback • Constraints • Mapping • Consistency • Affordances
 * Design as Craft**
 * Design Process**
 * Usability Goals**
 * Effectiveness**
 * Efficiency**
 * Safety**
 * Utility**
 * Learnability**
 * Memorability**
 * User Experience Goals**
 * User Experience Goals**
 * Design as Balance**
 * Norman’s Principles**

• Is functionality of object and a user’s potential interaction with it evident? • Are there cases where hiding things might be a good idea? • Examples? • On using object, does the user receive some evident response or result? • Does this response make sense to the user and encourage continued correct interaction? • Does the system deliberately constrain the user’s potential? • Why would you want to constrain certain paths of action? • Physical, logical and cultural constraints • Does the system mimic existing logical and cultural spatial/temporal relations? • Problems with arbitrary or random mapping • Does a given action produce similar results every time? • Is the interface consistent with similar products? • Does the design provide intuitive clues on what can or should be done? =**Class 3: People, Activities, Context and Technologies**= • (Continued from last week…see other two in last week’s notes) • Constraints • Mapping • Consistency • Affordances • Does the system deliberately constrain the user’s potential? • Why would you want to constrain certain paths of action? • Physical, logical and cultural constraints • Does the system mimic existing logical and cultural spatial/temporal relations? • Problems with arbitrary or random mapping • Does a given action produce similar results every time? • Is the interface consistent with similar products? • Does the design provide intuitive clues on what can or should be done? • An overused word? • People • Activities • Context • Technologies • Holistic, interdependent relations among these factors • Or, people come in different shapes and sizes • User groups are rarely monolithic or homogeneous - often a range of complexity to consider • Limits can be considered or maintained (and this is often done - examples?)
but should be done with utmost care • Height/weight • Strength and ability differences (coupled with age or training) • Use of senses • Physical abilities • Language variety and ability • Cultural, social and religious custom • Learning styles (e.g., multiple intelligences) • Attention and memory • Mental models • People of different sizes and backgrounds have different needs • Novice/experienced users • Lay/expert users • Irregular/Regular users • Organizational/Broad Social contexts - a range of abilities, skills, requirements • Or, people of different shapes and sizes need/want to do different things • Purpose of activity and what enables/constrains it • Also unintended purposes and consequences - many of which you want to design against (esp. since people have a tendency to do what they want, not what they need, should or must do…) • Regular vs. infrequent activity - e.g., twenty times a day vs. once every twenty yrs. • Time as pressure - does it work when necessary or under acute load? • Continuous vs. discrete action - one-off action vs. process, and how process is handled • Response time - does it react reliably as required? Synchronous vs. asynchronous • Solo work or requires cooperation with others (if so, interdependencies and bottlenecks become critical variables) • Defined vs. vague tasks - defined can be programmed and controlled, vague requires a lot more flexibility • Some tasks are mission-critical - failure is not an option • Handling error and unintended consequence - users behave in mysterious ways (and we shouldn’t be surprised by this…) • Error and unsafe use - not just user education, but also buy-in • Input methods • Data structures • Information Flow • Output methods • Feedback • Not just important in computing - physical examples?
• Or, different people do different things in a range of environments (some of which you can’t easily control) • Contextual factors may greatly impact people and what tools they use to deal with their tasks - or may be easily predictable and planned for… • Indoors? Outdoors? • Mobile? Stationary? (Implications to Access?) • Loud? Quiet? • Busy? Still? • Dangerous? Safe? • Access to assistance? • Social norms of use (and their evolution?) • Organizational - internal conflict between individual and collective goals? (CSCW examples?) • The things that a range of diverse people use to accomplish an equally diverse range of tasks in particular contexts (getting confusing yet? It should be…) • Technology broadly defined - realization and formalization of technique (Ellul) • Design issues similar to #9 and #10 of “activity” section (task and mediation…) • Two definitions and their sources • a) figurative/actual - to what good purpose? • b) literal - who benefits? • Technology is rarely the answer to all social or organizational problems • Esp. in environment of thoughtless technology hype, asking this question helps. • Balancing these (often conflicting) principles is the whole point (and the whole problem) • “There are no rules…and here they are.” (McCloud, 2006) • Users and their Requirements • Conceptual Design • Physical Design • Prototyping/Evaluation • Evaluation and Testing
 * Visibility**
 * Feedback**
 * Constraints**
 * Mapping**
 * Consistency**
 * Affordances**
 * Norman’s Principles**
 * Constraints**
 * Mapping**
 * Consistency**
 * Affordances**
 * PACT**
 * People**
 * Physical differences**
 * Psychological/Social**
 * Use Differences**
 * Activities**
 * Temporal Dimension**
 * Cooperation and Complexity**
 * Safety and Error**
 * Task and Mediation**
 * Context**
 * Physical Context**
 * Social/Org Contexts**
 * Technologies**
 * Cui Bono? A slice of healthy skepticism**
 * Design Cycles**
 * (Universal?) Elements of Design**

=**Class 4: Principles of Design/Understanding Users (1)**= =**Administrivia**= • Due to book shortage, lectures a bit more detailed, so we’re a bit behind • Besides, I’d rather have dialogue than information dumping • This week - Chs. 3, 5, and a bit of 15 • Next week - a bit of 5 and 15 • Notes as guide to test • Balancing these (often conflicting) principles is the whole point (and the whole problem) • “There are no rules…and here they are.” (McCloud, 2006) • Users and their Requirements • Conceptual Design • Physical Design • Prototyping/Evaluation • Evaluation and Testing • Accessibility • Usability • Acceptability • Engagement • Inclusive/universal design principles to address exclusion • Varying ability is the norm, not the exception • Designs for people with disabilities work for others too • Personal user experience and usability is affected by design constraints and choices • Equitable • Flexible • Simple • Perceptible • Tolerant • Effortless • “Just right” size and space (p. 53) • Often hard to design everything for everyone - how to decide? • Fixed/changing - is difference innate and immutable, or learned? • Common/rare - is difference a reasonably common occurrence? • Cheap/Expensive - what is the cost of accommodation? • Already nicely covered previously… • Early focus on users and their tasks • Empirical measurement (more to come there) • Iterative design • Coevolution and mutual dependence • Dynamics of product acceptability/adoption - not just technology, people or activities, but adaptation in context • Often more complex than most designers care to admit • Political - trust, organizational issues, who benefits, effects of networks • Convenience - easy, effortless? • Cultural or social habits - violation of social norms? Evolution of norms? • Usefulness - relevance to activities in context • Economic - price points, changes in business model • Creation of bond, attachment - a feeling of personal relationship, even anthropomorphism (examples?)
• Function of both aesthetics and usability • Identity - authenticity, cohesion • Adaptivity - change, personalization • Narrative - not just product but story • Immersion - feeling of wholeness, transportation elsewhere • Flow - transitions between states clear, fluid, seamless • Norman’s seven-step process • Essentially a cognitive model - humans as information processors, taking inputs, processing, providing outputs • Gulfs of evaluation and execution • Memory (sensory, working and long-term) • Recall and recognition • Attention • Perception • Gestalt processing • Representation • Mental models • Action and persuasion
 * Design Cycles**
 * (Universal?) Elements of Design**
 * Principles of Design**
 * Accessibility**
 * Principles of Universal Design**
 * Decision Tree**
 * Usability**
 * Acceptability**
 * Factors influencing Acceptability**
 * Engagement**
 * Functions of Engagement**
 * People and their actions**
 * Important Elements of Cognitive Psychology**
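The accommodation decision tree above (fixed/changing, common/rare, cheap/expensive) can be sketched in code. The slides only name the three questions - the function, the order it asks them in, and the rules themselves are my own illustrative assumptions:

```python
# Sketch of the accommodation decision tree: three questions from the slide,
# combined with illustrative (assumed) rules for when to accommodate.

def should_accommodate(difference_is_fixed: bool,
                       difference_is_common: bool,
                       accommodation_is_cheap: bool) -> bool:
    """Walk the three questions from the slide in order."""
    if difference_is_fixed and difference_is_common:
        return True                 # innate and widespread: accommodate
    if accommodation_is_cheap:
        return True                 # cheap fixes are worth doing anyway
    return difference_is_common     # expensive: only if many users benefit

# e.g. colour-blindness: fixed and common, so accommodate
print(should_accommodate(True, True, False))   # True
```

The point is not these particular rules but that the decision can be made explicit and debated, rather than left implicit in the design.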

=**Class 5: Understanding Individual Users**= =**Important Elements of Cognitive Psychology**= • Memory (sensory, working and long-term) • Recall and recognition • Attention • Perception • Gestalt processing • Representation • Mental models • Action and persuasion • Complex • Active • Contextual • Constructive/Additive • Short term storage - 30s • Rehearsal and repetition - visual and spatial • “chunking” - mnemonics and relations help memorizing • Examples? • Near unlimited store • Filtered - most working memory is dumped • Relational - we remember things in context and connection - whether the connections make sense or not • Multimodal and complex • Recall - specific “bits” and their retrieval • Recognition - relational connections • Humans are not good at recall, excellent at recognition • Computers are not good at recognition, excellent at recall • Design implications? • Working memory - decay and displacement • Long-term - retroactive, proactive, and compromised networks • Direct manipulation and representation • Chunking • Relational menu vs. explicit instruction (e.g., CLI) • Essential to human activity - a lot of “human error” can be traced to faulty attention • Like memory, complex as well, influenced by many factors • Controlled - focused but limited, slow but thorough, hard to sustain for long periods of time • Automatic - fast, allows for multitasking, nearly subconscious - but also vague and sporadic • Central/peripheral processing • Vigilance • Mental/physical workload • Stress and frustration • Competing things to attend to • Time pressures • Design implications? 
• Partially a sensory phenomenon, but influenced by past expectations, culture, motivation • Red/green/blue, Paris in the Spring examples • Consistency is assumed - can lead to illusion • Proximity - close things grouped • Continuity - patterns are completed • Part/Whole - we understand parts as part of system • Similarity - similar things are seen as together • Closure - we fill in gaps • Examples? • Shading • Linear Perspective • Relative size and height • Motion parallax • Overlap and Texture • Understandable • Memorable • Few • Unambiguous • Informative • Attractive • Distinct • Compact • Legible • Coherent • Familiar • Extensible • Parsimonious models of reality • Leverages long-term memory to focus attention on important things • Conflicts between designer, user and system models are common and a major source of design problems • Incomplete • Poorly tested • Unstable • Unreliable • Vague boundaries • Superstitious or illogical • And yet… • Consistency and Coherence • Design for Error and Feedback • Preparation for cognitive dissonance and frustration • Re-education and persuasion (attention issue) • New devices and tools challenge pre-existing notions of interaction - require readjustment on the part of user • Examples? • Fun might be the best way to educate?
 * Memory is…**
 * Working/Short-Term**
 * Long-Term**
 * Recall and Recognition**
 * Forgetting**
 * Palette Example**
 * Attention**
 * Controlled vs. Automatic**
 * Factors influencing attention**
 * Perception**
 * Gestalt**
 * Depth**
 * Example: Icons**
 * Mental Models**
 * Norman** **on Models**
 * Leveraging Mental Models**
 * Examples: New input/output devices**
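The “chunking” point above is easy to make concrete: the same digits are far easier to hold in working memory as a few chunks than as ten separate items. A minimal sketch, with an invented phone number:

```python
# Illustrative sketch of "chunking" for working memory: ten digits become
# three or four chunks, closer to working memory's limits.

def chunk(digits: str, size: int = 3) -> list[str]:
    """Split a digit string into fixed-size chunks (last chunk may be shorter)."""
    return [digits[i:i + size] for i in range(0, len(digits), size)]

print(chunk("4165550199"))   # ['416', '555', '019', '9']
```

This is exactly why phone numbers, credit cards and postal codes are printed in groups rather than as one unbroken string.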

=**Class 6: Qualitative Research Methods**= =**Research**= • What it is - and what it is not • Qualitative vs. Quantitative - general differences? • No “right” answer - triangulation to help verify information • IPS - analysis of 1) how school adapted to computers in the classroom and 2) how students could inform design of educational software tools • FSAE - analysis of how racecar team engaged organizational learning successfully despite 1) having limited resources and 2) turnover 40-66% annually • Global Seminar - online class analyzed through content analysis • Portsmouth Tavern - analysis of student/townie relations in a local pub • Inquiries for general vs. specific information • Information in human/social context • Builds and tells a narrative • With respect to design - narrative structured around informing design (some information minimized) • Secondary source research / Content Analysis • Interviews/Focus Groups • Case Studies/Grounded Theory • Ethnography/Observation • Contextual Inquiry/Action Research • Secondary source - learning from what’s already out there • Critical reading - what was learned when from whom via what methods - and consequently, what was not learned • Challenges? Costs? • Narrative created through analysis of existing or generated artefacts • Attempt to find common patterns or occurrences • Structured - either computer-based (e.g., nVivo/NUDIST) or manually (e.g., index cards, whiteboards) • Challenges? Costs? • Semi-structured discussions with individuals to discover their perspective or opinion • Should be recorded and analyzed at later date via content analysis • Challenges? Costs? • Collective interview - allows you to gain more opinions in the same time • Also generates information via dialogue and debate - especially in homogeneous groups • Challenges? Costs?
• Focusing strongly on one instance - “thick description” • Goal is not generalizability but transferability - case is unique but can still highlight trends in similar cases • Can build towards grounded (or intuitive) theory - particularly if other cases confirm • Challenges? Costs? • Learning through observing people and their activities in context • Ethnography - a more detailed, holistic investigation of people and their social contexts • Requires empathy but also some degree of neutrality (or else you “go native”) • Challenges? Costs? • Often in design, the point is not simply to understand but to influence change • Contextual inquiry - elements of observation but also elements of changing that which is observed • Requires buy-in of those in context • Challenges? Costs? • Flow • Sequence • Artefact • Cultural • Physical • Analysis of flow of information (and its potential breakdown) • Individuals, roles and groups - and their relations in handling information • All potential links, big and small • Actual actions vs. theoretical (e.g., the org chart problem) • Automatic actions - things that people do without thinking • Look for breakdowns • Work task representation - flow for tasks • Intent - what sequence is supposed to do • Trigger - what sets the sequence in motion • Steps - how this is done • Breakdowns - problems in procedure • Steps that impact performance locally (influence from far away sources) • Actions or intents that don’t fit the “right” model - breakdowns • Branches of action - flowchart and decision making • The tools and artefacts of process • Content • Structure • Presentation • Annotation and use over time • Artefacts as “captures” of flow and sequence • Some are less useful or functional than others - but still used - why? • Change in artefact form, presentation might cause immediate concern or revolt from users? 
• Context and its collective culture • Influencers - leaders (formal and informal) whose actions carry weight (and those who don’t) • Extent and direction of influence • Place and proximity can influence flow and sequence • Physical layout • Workflow movement diagrams • Industrial design issues
 * Four personal examples**
 * Four examples cont.**
 * Qualitative Research**
 * Methods**
 * Secondary Source Analysis**
 * Content Analysis**
 * Interviews**
 * Focus Groups**
 * Case Study**
 * Ethnography & Observation**
 * Contextual Inquiry and Action Research**
 * CI Models**
 * Flow**
 * Things to consider…**
 * Sequence Models**
 * Things to consider…**
 * Artefact Model**
 * Things to consider…**
 * Cultural Model**
 * Physical Model**

=**Class 7: Quantitative Research Methods**= =**Quantitative Methods**= • Unlike qualitative, involves metrics reduced to numbers • Why helpful? • Why problematic? • Common method of obtaining information from broad cross-section of people • Quality of information directly depends on quality and purpose of questions and who is surveyed • Online tools help - e.g., http://www.surveymonkey.com • Who is included in sample? • How are they reached? • Response rate issues • Responders vs. non-responders - are they qualitatively different groups? • Directly impacts quality of results - e.g., 1936 presidential poll • Understandable • Unambiguous • Collects data that is actually valuable • Can be easily analyzed • I’d add - limited in scope - take respondent’s attention span and willingness to help into account! • Specific better than general • Open/closed-ended questions - benefits and challenges • Opening and closing preamble and instructions important - especially if you’re not there to supervise • Test it before you use it • 1-5, 1-7, 1-9 scales • Midpoint - what does it mean? If no opinion, give that option • Take care with too many consecutive items with same polarity of options - leads to patterned responses • Semantic differential can be effective • Set of related questions measuring attitudes, beliefs, orientations etc. • Ex: multiple intelligences (others?) • Scales must actually measure what they claim, not be redundant • Verify authenticity (esp. in web searches) - many scales are meaningless • Controlled specific measurement of phenomenon • Often used to determine causation - not just X related to Y but X causes Y • Inferential vs.
descriptive statistics - not simply 68% do X, but that this leads to something else • Benefits: Controlled environment, measured responses, quantitative data that can lead to causal links • Limitations: Must simplify environment to minimize other potential explanatory variables, creating rather fake environment and tasks • Observation without being there - quantitative artifacts - e.g., Web access logs, click regions, eye tracking • PeopleMeter example • Records consequences of actual action - but potential privacy and collection issues (e.g., social networking helmet) • Hey, the course title means something! • Hierarchical task analysis and GOMS - descriptions of cause and effect at functional level • Definitely important for planning computing systems (and often used - e.g. flowcharts, UML) • Why? Computers are not all that bright. • Goals • Operators • Methods • Selection Rules • Very possible that human and system GOMS differ - which causes problems • Methods are like tools in a toolbox - all are useful for something - but you don’t hammer a nail with a screwdriver • Goals of research should primarily influence choice of tools • What else does? • Type of data needed - qual vs. quant, descriptive vs. inferential • Cost and time to collect data • Cost and time to analyze data • Triangulation needs • Contextual requirements • 2002 racecar seat • Partially materials selection, stress calculations etc. - but mostly ergonomic • Quantitative measures of 5th to 95th percentile team members, and everyone in between • Lots of individual testing though too • Social network questionnaire - who trusted whom in six domains • Correlated with three scales - interdependence, independence and proactivity • Correlational study - what relations existed between scales and position in network? • Relations verified by respondent reflection and personal experience • Redesign implications
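A sketch of how Likert responses get scored, including a reverse-keyed item - one way to break up the same-polarity pattern warned about above. The responses and item mix are invented:

```python
# Scoring a five-point Likert scale. Reverse-keyed items (negatively worded)
# are flipped before summing, so "strongly disagree" with a negative
# statement counts the same as "strongly agree" with a positive one.

def score_item(response: int, reverse: bool = False, points: int = 5) -> int:
    """Map a 1..points response to a score, flipping reverse-keyed items."""
    return (points + 1 - response) if reverse else response

# (response, reverse-keyed?) for one respondent
responses = [(4, False), (2, True), (5, False)]
total = sum(score_item(r, rev) for r, rev in responses)
print(total)   # 4 + (6-2) + 5 = 13
```

Mixing polarity like this also makes patterned (straight-line) responding easier to detect, since it produces inconsistent scores after flipping.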
 * Surveys/Questionnaires**
 * Sampling**
 * General Questionnaire Guidelines**
 * Question Guidelines**
 * Likert Scale Q**
 * Scales**
 * Experiments**
 * Benefits and Limitations**
 * Data Mining**
 * Imagining the Audience in a Wired World**
 * GOMS**
 * Choosing Tools**
 * Other factors influencing method choice**
 * Physical research example**
 * Org. research example**
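GOMS at its simplest can be sketched as a keystroke-level calculator: estimate task time by summing operator times. The operator values below are the classic Card, Moran and Newell estimates (in seconds) and should be treated as rough planning figures; the "delete a file" sequence is an invented example:

```python
# Keystroke-level sketch of GOMS: each low-level operator gets a standard
# time estimate, and a task is an operator sequence whose times are summed.

KLM_TIMES = {
    "K": 0.28,  # keystroke (average skilled typist)
    "P": 1.10,  # point with mouse
    "B": 0.10,  # mouse button press/release
    "M": 1.35,  # mental preparation
    "H": 0.40,  # move hand between keyboard and mouse
}

def estimate_seconds(operators: str) -> float:
    """Sum operator times for a sequence like 'MPB' (think, point, click)."""
    return sum(KLM_TIMES[op] for op in operators)

# "Delete a file": think, point at icon, click, think, point at menu, click
print(round(estimate_seconds("MPBMPB"), 2))   # 5.1
```

Comparing two candidate interaction sequences this way is a cheap first pass before any user testing - and a concrete way to see where human and system GOMS models might differ.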

=**Class 8: Understanding Interaction in Complex Environments**= =**Complexity and Interaction**= • What technologies may get more complex to use when more people are involved? • Designing for lots of simultaneous users can be daunting • New technologies and contexts can also be difficult • Why pay attention to ergonomics? • Inclusive design (design not just for average, but all?) • Efficient interaction (Fitts’ law) • Selling point (ergonomic design in consumer products) • Legal requirements (carpal tunnel questions) • Prototype testing (virtual or real) • Response time • Environmental simulation • Power and load characteristics • Acute and chronic use (some effects only show up over repeated use) • People interact with other people using other tools to realize activity • Communication and coordination becomes essential - interaction challenges? • Internal representations - individual mental models of reality • External representations - anything outside individual that guides activity (e.g., layout, notes, diagrams, etc.) • Shared representations - individuals come together over external representations to create shared understanding (or confusion…) • Treats user interaction as a set of defined plans • Plans in context - often contingent and less cut and dry than expected • Humans don’t crash when plans fail - we adapt, create new plans on the fly • Flowcharting interaction patterns - order of actions, decisions made • Arbitrary and acontextual - how things should be done, not necessarily how they are - but a good first step nonetheless
 * Ergonomics**
 * Designing for ergonomics**
 * Distributed Cognition**
 * Internal/External/Shared**
 * Plans and Situated Actions**
 * Hierarchical Task Analysis**
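Fitts' law, cited under ergonomics above, is easy to make concrete: movement time grows with distance to the target and shrinks with target width. This sketch uses the common Shannon formulation; the `a` and `b` constants are placeholders that would normally be regression-fit from observed pointing data:

```python
import math

# Fitts' law sketch: MT = a + b * log2(D/W + 1), where D is distance to
# the target, W its width, and a/b are empirically fitted constants.

def fitts_mt(distance: float, width: float,
             a: float = 0.1, b: float = 0.15) -> float:
    """Predicted movement time in seconds (Shannon formulation)."""
    return a + b * math.log2(distance / width + 1)

# A far, small target takes longer than a near, big one:
print(round(fitts_mt(distance=800, width=20), 2))   # 0.9
print(round(fitts_mt(distance=100, width=50), 2))   # 0.34
```

This is why screen edges and corners are such efficient targets: they behave as effectively infinite in width along one axis.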

• Going through the task analysis as actually performed in context • Do users actually perform tasks as set out in plans? • If not, what problems do they have? • “think aloud” strategy - get users to vocalize their decision patterns and their confusion • Represents complexity of interaction among subjects, objects, artefacts and cultural expectations • As a theory, can be hard to use in practice - but also quite powerful • Subject - people • Object - goal, task • Artefact - tools, technologies • Community - others affected by activity • Division of Labour - Power relations • Praxis - norms governing activity • Primary - conflict at node (e.g., two people, different notions) • Secondary - conflict between nodes (e.g., power relations frustrating action) • Tertiary - conflicts when activities are redesigned (e.g., new process conflicts with models used in old) • Quaternary - conflicts between simultaneous activities (e.g., one action contradicts another) • Collaborative virtual environments - VR which embodies user in virtual space • Affords interaction with other embodied users in real time • Second Life example • Interesting way to bridge distance gaps • Time gaps a problem • Orientation issues in virtual world - people talking to walls, etc. (and why it doesn’t matter) • Confusing spaces and avatars - fantastic displays but for what purpose? • Subjects - conference attendees • Object - engage in collaboration, talk • Artefacts - virtual conference environment, posters, websites, etc. • Community - attendees, lurkers • Division of Labour - who is/is not allowed to talk at any given time, access restrictions • Praxis - expectations of conference environment, turn-taking, etc. • Difficult to analyze as whole - there’s no “right” way to attend a conference • Specific elements can be analyzed though - e.g., conference registration and payment
 * Cognitive Walkthrough**
 * Activity Theory**
 * Nodes in Activity Triangle**
 * Contradictions**
 * Example: CVEs**
 * CVEs in Conferences**
 * Activity Theory Analysis**
 * Task Analysis**

• A narrative that is accessible and useful to all stakeholders (designers and users) • A narrative that outlines complexity to designers • A narrative that envelops full complexity of design in context • Action and reflection balance • Fluidity and concreteness balance • Envelops external factors/constraints • Allows for understanding of many effects at many levels • Can build scientific understanding (grounded theory) • Anecdotes, observation, interview transcripts etc. • As close to user’s direct experience as possible - ideally in their own words expressing their own issues • Left just at this level - just a collection of interesting stories, no attempt at creating common themes • Identification of common themes and problems • Builds conceptual models • Used for generating ideas for design alternatives, specifying requirements • One level of abstraction from user stories - but does not yet address how technologies resolve issues • From user stories and conceptual scenarios, one can build a list of what the technology should (and should not) be or do • Functional (e.g., task oriented - what it does) or non-functional (e.g., aesthetic, legal, cultural, ease of use issues) • Must have (without this, it’s useless) • Should have (would be a clear value-added requirement but will work without it) • Could have (might be nice but not essential) • Want but Won’t have (can wait until future iterations) • Functional - must have - non-functional, should/could/want to have • Defines requirements from conceptual scenarios more concretely • Builds physical/prototypical models • Starts involving technologies and interaction patterns at a general level • May be many concrete scenarios from one conceptual scenario • Formalized interaction patterns • All design questions resolved • Can be modeled using formal procedures and language (e.g., UML) • In software, this is “pseudocode” - in hardware, the first functional iteration • Important to collect user stories - and from this,
build conceptual scenarios from which concrete scenarios can be derived • Documentation of stories through text and other media - the wikis will help there… • HIC example in text quite good - scenarios of playing an MP3 on a home information system, with annotations of potential issues
 * Class 9: Scenarios and Requirements**
 * Scenarios**
 * Scenario elements**
 * User Stories**
 * Conceptual scenarios**
 * Requirements**
 * Prioritization (MoSCoW)**
 * Concrete scenarios**
 * Use Cases**
 * Documenting Scenarios**
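The MoSCoW buckets above can be treated as data: tag each requirement, then sort so must-haves surface first. The requirements listed here are invented examples, loosely echoing the HIC MP3 scenario in the text:

```python
# Sketch of MoSCoW prioritization: an explicit ordering over the four
# buckets, applied to a list of (requirement, priority) pairs.

MOSCOW_ORDER = {"must": 0, "should": 1, "could": 2, "wont": 3}

requirements = [
    ("remembers user preferences", "could"),
    ("plays an MP3 file", "must"),
    ("syncs playlists across devices", "wont"),
    ("shows album art", "should"),
]

for name, priority in sorted(requirements, key=lambda r: MOSCOW_ORDER[r[1]]):
    print(f"{priority:>6}: {name}")
```

Writing the buckets down like this keeps the prioritization honest: a "must" that nobody can defend in the list is a prompt to demote it.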

• User studies to gain information about PACT as existing • Scenarios derived from user studies to tell stories of design issues • Requirements derived from scenarios to highlight must, should, could, want needs • On to redesign! • A good intermediate step before finalization of new design • Basic functionality of new design represented, made tangible and accessible • Used to sell requirements, evaluate whether they’ve been met before finalizing • Obtaining feedback on design concepts • Deciding among alternative options • Assessing usability in practice • Avoiding huge redesign mistakes • Involving users and achieving buy-in • Quickly assembled, changed, destroyed • Readily accessible, cheap materials (paper and others…) • Focus on broad ideas vs. details • Does not lead user to believe this is actually final - open to change • Robust - often handled by many, must be overengineered • Scope - broad issues and concerns - details not important at this stage • Level of instruction - too much guidance can skew results • Flexibility - users should have ability to edit/annotate prototype on the fly • Especially useful for electronic products/services, but physical form prototypes also count • Strong attention to detail, final form • Might be misconstrued as the real, final thing - any errors cause negative reactions, expectations raised to unattainable levels • Participatory design - getting users participating as co-designers • Roots in Scandinavia - why? 
• User buy-in to design explicitly encouraged - little concern that user needs aren’t being met • Time consuming w/multiple levels of buy-in • Might be hard to expect non-expert users to have valid opinion • PICTIVE method at Island Pacific School - low-fi prototype of JavaBean application development • Some guidance, but student-driven • Simple, broad data objects manipulated physically - kids could make new ones • Led to hi-fi developed prototypes designed by programmers and returned to students • Intention • Metrics • People • Activities • Context • Technology • Why evaluate? What are you trying to prove? • Early design prototypes - generally to scope out alternatives • Later - more attention to detail, specific features, specific usability problems • Measuring success/improvement/failure • Should be tied to evaluation of redesign and alternatives - peripheral data should be limited • Effectiveness - task completion, error rates, ease of learning, memorability • Efficiency - time to task completion, on non-productive steps, learning • Satisfaction - general aesthetic and emotional feel, voluntary use, frequency of use • Should be done with intended audience, doing specified activities in specified context as defined in scenarios • Low-fi prototype - still investigative • Hi-fi - specific questions and tasks, as close to final product as possible • Use scenarios and task analysis to provide users context - more efficient • Estimate time to completion first to see if users are matching expectations • Encourage user feedback in process of use - keep them talking • Debrief with broader questions
 * Class 10: Prototypes and Evaluation**
 * The story so far…**
 * Prototypes**
 * Why prototypes?**
 * Lo-fi prototypes**
 * Lo-fi Issues**
 * Hi-fi prototypes**
 * Prototypes and PD**
 * IPS Example**
 * IMPACT evaluation model**
 * Intention**
 * Metrics and Measures**
 * Effectiveness, Efficiency and Satisfaction**
 * PACT and evaluation**
 * Guidelines for User Evaluation**
 * CCT 333: Imagining the Audience in a Wired World**
 * Scenarios**

• Action and reflection balance
• Fluidity and concreteness balance
• Envelops external factors/constraints
• Allows for understanding of many effects at many levels
• Can build scientific understanding (grounded theory)
• Anecdotes, observation, interview transcripts, etc.
• As close to user’s direct experience as possible - ideally in their own words expressing their own issues
• Left just at this level - just a collection of interesting stories, no attempt at creating common themes
• Identification of common themes and problems
• Builds conceptual models
• Used for generating ideas for design alternatives, specifying requirements
• One level of abstraction from user stories - but does not yet address how technologies resolve issues
• From user stories and conceptual scenarios, one can build a list of what the technology should (and should not) be or do
• Functional (e.g., task oriented - what it does) or non-functional (e.g., aesthetic, legal, cultural, ease of use issues)
• Must have (without this, it’s useless)
• Should have (would be a clear value-added requirement but will work without it)
• Could have (might be nice but not essential)
• Want but Won’t have (can wait until future iterations)
• Functional - must have; non-functional - should/could/want to have
• Defines requirements from conceptual scenarios more concretely
• Builds physical/prototypical models
• Starts involving technologies and interaction patterns at a general level
• May be many concrete scenarios from one conceptual scenario
 * Scenario elements**
 * User Stories**
 * Conceptual scenarios**
 * Requirements**
 * Prioritization (MoSCoW)**
 * Concrete scenarios**
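The MoSCoW prioritization above can be sketched as a small data structure. This is a minimal, hypothetical Python illustration (the requirement texts and helper names are examples, not from the course materials):

```python
# Minimal sketch of tracking requirements with MoSCoW priorities.
# All names here are illustrative assumptions, not course-defined APIs.
from dataclasses import dataclass

PRIORITIES = ("must", "should", "could", "want")  # MoSCoW buckets

@dataclass
class Requirement:
    description: str
    priority: str      # one of PRIORITIES
    functional: bool   # task-oriented vs. aesthetic/legal/cultural/ease-of-use

def by_priority(requirements):
    """Group requirements into MoSCoW buckets, in priority order."""
    buckets = {p: [] for p in PRIORITIES}
    for r in requirements:
        buckets[r.priority].append(r)
    return buckets

reqs = [
    Requirement("Play an MP3 on the home information system", "must", True),
    Requirement("Annotate tracks with usage issues", "should", True),
    Requirement("Match the living-room aesthetic", "could", False),
]
grouped = by_priority(reqs)
print([r.description for r in grouped["must"]])
```

Listing "must have" items separately like this makes it easy to check that the non-negotiable requirements are met before any "should/could/want" work begins.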

• Formalized interaction patterns
• All design questions resolved
• Can be modeled using formal procedures and language (e.g., UML)
• In software, this is “pseudocode” - in hardware, the first functional iteration
• Important to collect user stories - and from these, build conceptual scenarios from which concrete scenarios can be derived
• Documentation of stories through text and other media - the wikis will help there…
• HIC example in text quite good - scenarios of playing an MP3 on a home information system, with annotations of potential issues
• Prototypes and evaluation - seeing if you got it right (and what to do if you haven’t…)
• Identification of scenarios and requirements in projects

=**Class 10: Prototypes and Evaluation**=

• User studies to gain information about PACT as existing
• Scenarios derived from user studies to tell stories of design issues
• Requirements derived from scenarios to highlight must, should, could, want needs
• On to redesign!
• A good intermediate step before finalization of new design
• Basic functionality of new design represented, made tangible and accessible
• Used to sell requirements, evaluate whether they’ve been met before finalizing
 * Use Cases**
 * Documenting Scenarios**
 * Next week**
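A use case's formalized interaction patterns can be written as executable pseudocode. The sketch below is a hypothetical Python illustration of the play-an-MP3 scenario mentioned in the text (class, method and track names are invented for the example):

```python
# Hypothetical sketch: a use case as pseudocode, with the main success
# scenario and one exception path made explicit.
class PlayMp3UseCase:
    def __init__(self, library, player):
        self.library = library  # mapping of track title -> file
        self.player = player    # anything with a play(file) method

    def run(self, title):
        # 1. User selects a track; 2. system looks it up; 3. playback starts.
        if title not in self.library:
            return "error: track not found"  # exception path
        self.player.play(self.library[title])
        return "playing " + title

class FakePlayer:
    def play(self, f):
        self.last = f

uc = PlayMp3UseCase({"Song A": "a.mp3"}, FakePlayer())
print(uc.run("Song A"))   # main success scenario
print(uc.run("Song B"))   # exception path
```

Writing the exception path explicitly, as here, is the point of moving from concrete scenarios to use cases: every design question, including error handling, has to be resolved.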
 * CCT 333: Imagining the Audience in a Wired World**
 * The story so far…**
 * Prototypes**

• Obtaining feedback on design concepts
• Deciding among alternative options
• Assessing usability in practice
• Avoiding huge redesign mistakes
• Involving users and achieving buy-in
 * Why prototypes?**

• Quickly assembled, changed, destroyed
• Readily accessible, cheap materials (paper and others…)
• Focus on broad ideas vs. details
• Does not lead user to believe this is actually final - open to change
• Robust - often handled by many, must be overengineered
• Scope - broad issues and concerns - details not important at this stage
• Level of instruction - too much guidance can skew results
• Flexibility - users should have the ability to edit/annotate the prototype on the fly
• Especially useful for electronic products/services, but physical form prototypes also count
• Strong attention to detail, final form
• Might be misconstrued as the real, final thing - any errors cause negative reactions, expectations raised to unattainable levels
• Participatory design - getting users participating as co-designers
• Roots in Scandinavia - why?
• User buy-in to design explicitly encouraged - little concern that user needs aren’t being met
• Time consuming w/ multiple levels of buy-in
• Might be hard to expect non-expert users to have a valid opinion
 * Lo-fi prototypes**
 * Lo-fi Issues**
 * Hi-fi prototypes**
 * Prototypes and PD**

• PICTIVE method at Island Pacific School - low-fi prototype of JavaBean application development
• Some guidance, but student-driven
• Simple, broad data objects manipulated physically - kids could make new ones
• Led to hi-fi developed prototypes designed by programmers and returned to students
• Intention
• Metrics
• People
• Activities
• Context
• Technology
• Why evaluate? What are you trying to prove?
• Early design prototypes - generally to scope out alternatives
• Later - more attention to detail, specific features, specific usability problems
• Measuring success/improvement/failure
• Should be tied to evaluation of redesign and alternatives - peripheral data should be limited
• Effectiveness - task completion, error rates, ease of learning, memorability
• Efficiency - time to task completion, time on non-productive steps, learning
• Satisfaction - general aesthetic and emotional feel, voluntary use, frequency of use
• Should be done with intended audience, doing specified activities in specified context as defined in scenarios
• Low-fi prototype - still investigative
• Hi-fi - specific questions and tasks, as close to final product as possible
• Use scenarios and task analysis to provide users context - more efficient
• Estimate time to completion first to see if users are matching expectations
• Encourage user feedback in process of use - keep them talking
• Debrief with broader questions
• Application to computer-supported collaborative work
• Student survey - volunteer to drop it off?
• Feel free to do this one too: http://www.ratemyprofessors.com/ShowRatings.jsp?tid=732035
 * IPS Example**
 * IMPACT evaluation model**
 * Intention**
 * Metrics and Measures**
 * Effectiveness, Efficiency and Satisfaction**
 * PACT and evaluation**
 * Guidelines for User Evaluation**
 * Survey/Next Week**
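The effectiveness and efficiency measures above can be summarized straightforwardly from user-test session logs. A minimal Python sketch (the session fields and threshold are invented for illustration):

```python
# Hypothetical sketch: summarizing effectiveness (completion, errors)
# and efficiency (time on task) measures from evaluation sessions.
sessions = [
    {"completed": True,  "errors": 1, "seconds": 95},
    {"completed": True,  "errors": 0, "seconds": 70},
    {"completed": False, "errors": 4, "seconds": 180},
]

def summarize(sessions, expected_seconds):
    """Aggregate per-session logs into the measures named in the notes."""
    n = len(sessions)
    completion_rate = sum(s["completed"] for s in sessions) / n
    mean_errors = sum(s["errors"] for s in sessions) / n
    mean_time = sum(s["seconds"] for s in sessions) / n
    return {
        "completion_rate": completion_rate,          # effectiveness
        "mean_errors": mean_errors,                  # effectiveness
        "mean_time": mean_time,                      # efficiency
        "over_estimate": mean_time > expected_seconds,  # vs. pre-test estimate
    }

print(summarize(sessions, expected_seconds=90))
```

The `expected_seconds` comparison reflects the guideline above: estimate time to completion first, then check whether users are matching that expectation.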

=**Class 11: Special Topics: Supporting Collaboration in Group Work**=

• CSCW - structured group communication technologies to share knowledge
• Knowledge management as emergent field in computing
• But is it all just computer-based? Broader definitions of technology apply here - organizational learning often not computer mediated
• Forming - figuring out task, administrative requirements - often tenuous, anxiety high
• Storming - brainstorming, conflict, can be rebellious
• Norming - cohesion and stability arrive, norms for conflict resolution
• Performing - task-directed work
• (Decay - task done, group dissolves)
• People have urge to belong, will comply with stated norms (even to the point of extremes - Zimbardo study and current examples)
• Power of suggestion from opinion leaders - Asch study of line length
• Both can easily lead to groupthink and mob mentality if not monitored
• “Tragedy of the commons” - the larger the group, the more individuals might feel it possible to act in a selfish manner and/or be lazy
• Social compensation - group cohesion and leadership role might increase commitment even in the face of loafing
• Can allow for individual contributions to be heard equally, but leadership still emerges over time
• Accountability, anonymity and loafing
• Can perhaps be isolating, taking longer to form group cohesion
 * CCT 333: Imagining the Audience in a Wired World**
 * Collaboration Technologies**
 * Group Communication Concepts**
 * Compliance and Conformity**
 * Social Loafing and Compensation**
 * Computing and groupwork**

• Who is doing what, where, when and why?
• Any technology for collaboration has to answer at least some of these questions - without answers, it’s hard to collaborate
• Privacy and disruption issues
• Challenges common to many design issues, not just CSCW
• FSAE racecar study: very much about resolving these issues technologically and organizationally
• Required organizational buy-in - integration into politics of space
• Sharing information takes time
• If costs of sharing outweigh benefits, people quickly stop
• Short and long term cost-benefit
• FSAE: report writing, testing procedures issues - but also extraordinary examples of information sharing
• Collaboration technologies must be used by a critical mass, or they cease to be effective
• Too much mass can be confusing though - email in particular
• FSAE: Email system (over)use and database issues, also importance of f2f
• Technologies for collaboration exist within social and political contexts and often influence them - new technologies can create enemies quickly
• FSAE: issues in selling safety and testing procedures
• Computing technologies in particular - rule driven and formalized
• Humans - random, contingent, able to sort out new ideas on the fly
• Handling exceptions necessary
• FSAE: move to searchable full text data vs. formal database architecture
• Some technologies compel sharing, create significant demands on time
• Individual work important - sharing should supplement but not trump it
• FSAE: meetings - make them efficient, necessary and few
• Hard to tell if a particular collaboration strategy is working
• What works changes over time and with changes in organizational culture
• FSAE: annual reports with recommendations, some of which were contradictory
• People intuitively know how to represent information - shared representations are harder without common language and symbols though
• FSAE: issues in notation of data, storage of shared resources; use of physical models as instructive tools
• Without organizational buy-in, even the best technologies may fail
• FSAE: buy-in at leader and faculty advisor level, but also on the ground level; management by walking around
• Same time/place - meetings, support tools
• Same time, different place - IM, collaborative whiteboards, tele/videoconferencing
• Different time, same place - project management artefacts, post-its
• Different time/place - discussion forums, email, wikis
• Meetings
• Artefacts
• Lab notebooks
• Trial and Error
• Email
• Alumni contact
• Industry contact
• Telephone
 * Awareness, Technology and Groupwork**
 * Grudin’s 8 Challenges**
 * Who works, who benefits?**
 * Critical Mass**
 * Political and Social Factors**
 * Exception Handling**
 * Group Communication as Exception**
 * Evaluating success**
 * Collective intuitiveness**
 * Managing acceptance**
 * Technologies and Collaboration**
 * A few FSAE learning mechanisms**
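The time/place matrix of collaboration technologies above can be represented as a simple lookup table. A minimal Python illustration (the quadrant keys are paraphrased from the notes; the technology lists are those given in the lecture):

```python
# The classic same/different time-and-place groupware matrix as a dict,
# keyed by (time, place) quadrant. Quadrant key strings are an assumption.
groupware = {
    ("same time", "same place"): ["meetings", "support tools"],
    ("same time", "different place"): [
        "IM", "collaborative whiteboards", "tele/videoconferencing"],
    ("different time", "same place"): [
        "project management artefacts", "post-its"],
    ("different time", "different place"): [
        "discussion forums", "email", "wikis"],
}
print(groupware[("different time", "different place")])
```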

• Past Reports
• Books/articles
• Gossip and Informal Chat
• Reverse Engineering
• Corporate Intelligence

• Some argue for specialized technologies (e.g., integrated databases like Notes, PeopleSoft)
• Can be powerful and tailored to org. needs
• Can be expensive to design and maintain
• FSAE: very little financial or human resources to maintain complex systems; off-the-shelf components easier to coordinate and use
• Multiple avenues of learning
• Avenues can contradict each other
• Individual preferences play a role
• Information generally follows the path of least resistance (for better or worse)
• Design for multiple complementary channels, minimizing noise or error but encouraging discussion and debate
• David with a bit on CSCL, and test discussion
• Presentations - come ready to show off your redesign work and answer questions
 * Specialized vs. Common Technologies**
 * The “right” mix?**
 * Next week**

=**Class 12: A Very Brief Bit on CSCL and Final Test Discussion**=

• What types of learning technologies exist?
• Online learning technologies - examples?
 * CCT 333: Imagining the Audience in a Wired World**
 * CSCL Survey**

• Experience with VISTA here - examples?
• Implementation issues
 * The VISTA case**

• User study - who would you target? Why? What methods?
• MoSCoW requirements for VISTA - what must exist? What do you think should/could be incorporated?
• Redesign ideas…
 * Redesigning VISTA**

• 5 MC questions (approx. 15%)
• 3 “fill in the box” type questions (approx. 50%)
• Sample “fill in the box” question on Wiki (this will not be used)
• Essay (approx. 35%): application of a range of concepts to a problem, sectioned answer - similar to final project but less involved
• MC questions are not tricks, but not obnoxiously easy either
• Point form for box questions - how to do point form right!
• If you don’t know, guess - give me nothing, I can only give you nothing back.
• Timing is everything - budget time according to marks allotted for the test
• Final test (good luck!)
• “Documentation Wiki” - really finalizing what you’ve done, cleaning things up, making evidence of process clear - do this by Friday
• No labs, obviously - after test, we’re done!
 * Final test questions**
 * Hints**
 * Next week**