Lecture 14 — Heuristic Evaluation - Why and How | HCI Course | Stanford University

Artificial Intelligence - All in One
15 May 2016 | 16:42

Summary

TL;DR: The video introduces heuristic evaluation, a technique for identifying usability issues in software design. Created by Jakob Nielsen, it involves experts using a set of principles to critique a design, and it can be applied at various stages of the design process. The method is cost-effective, quick to perform, and beneficial for catching severe problems, though it may generate false positives. The script explains the evaluation process, the importance of multiple evaluators, and how to use the findings to improve design, emphasizing the technique's value alongside other feedback methods.

Takeaways

  • 🔍 Heuristic evaluation is a critique-based approach for identifying usability issues in software design, often using a set of principles or heuristics.
  • 👥 It is valuable to receive peer critique at various stages of the design process, such as before user testing, before redesigning, and before releasing software.
  • 📈 Heuristic evaluation was created by Jakob Nielsen and is a cost-effective method for finding usability problems, with a high benefit-cost ratio.
  • 📝 The technique involves having evaluators independently assess the design against a set of heuristics and then discuss their findings collectively.
  • 🛠 It can be applied to both working user interfaces and sketches, making it compatible with rapid prototyping and low-fidelity designs.
  • 🔑 Nielsen's 10 heuristics serve as a good starting point, but they can be customized or expanded based on the specific needs of the system being evaluated.
  • 🤔 The process begins with setting clear goals, even if the findings might be unexpected, and involves giving evaluators tasks to perform with the design.
  • 📊 Using multiple evaluators helps in finding a wider range of problems, with each evaluator potentially identifying unique issues.
  • 📈 The benefit of each additional evaluator diminishes, so the general recommendation is 3-5 evaluators for a good balance of cost and problems found.
  • 🚫 Heuristic evaluation might generate false positives that wouldn't occur in real user testing, which is why it's important to combine it with other methods.
  • 📝 After evaluation, a debrief session with the design team is crucial for discussing the findings, estimating fix efforts, and brainstorming improvements.

Q & A

  • What is heuristic evaluation?

    -Heuristic evaluation is a technique for finding usability problems in a design, where evaluators use a set of principles or heuristics to identify issues in the user interface.

  • Who created heuristic evaluation?

    -Heuristic evaluation was created by Jakob Nielsen and colleagues about 20 years ago.

  • Why is heuristic evaluation valuable?

    -Heuristic evaluation is valuable because it allows for quick feedback on a design with a high return on investment, and it can be used with both working user interfaces and sketches.

  • What are some ideal stages to conduct heuristic evaluation in the design process?

    -Heuristic evaluation can be particularly valuable before user testing, before redesigning an application, when needing data to convince stakeholders, and before releasing software for final refinements.

  • What is the purpose of using multiple evaluators in heuristic evaluation?

    -Using multiple evaluators helps to find a wider range of problems due to the diversity of perspectives, which can increase the effectiveness of the evaluation.

  • What is the recommended number of evaluators for heuristic evaluation according to Jakob Nielsen?

    -Jakob Nielsen suggests that three to five evaluators tend to work well for heuristic evaluation, balancing the cost and the number of problems found.

  • How does heuristic evaluation compare to user testing in terms of speed and interpretation?

    -Heuristic evaluation is often faster than user testing as it requires less setup and the results are pre-interpreted, providing direct feedback on problems and solutions.

  • What are some potential drawbacks of heuristic evaluation compared to user testing?

    -Heuristic evaluation might generate false positives that wouldn't occur in a real user environment, whereas user testing is more accurate but can be more time-consuming and resource-intensive.

  • What is the significance of severity ratings in heuristic evaluation?

    -Severity ratings help prioritize which problems to fix first by considering the frequency, impact, and pervasiveness of the issues found during the evaluation.

  • How should evaluators report the problems they find during heuristic evaluation?

    -Evaluators should report problems specifically, relating them to one of the design heuristics, and provide detailed descriptions to help the design team understand and address the issues efficiently.

  • What is the final step in the heuristic evaluation process after identifying and rating problems?

    -The final step is to debrief with the design team to discuss the findings, estimate the effort required to fix issues, and brainstorm future design improvements.

Outlines

00:00

🔍 Introduction to Heuristic Evaluation

This paragraph introduces the concept of heuristic evaluation, a method of software evaluation that involves experts using a set of principles or heuristics to identify usability issues in a design. It contrasts this with empirical methods like user testing and formal methods that involve predictive modeling of user behavior. The speaker emphasizes the value of peer critique at various stages of the design process, such as before user testing to conserve participants for issues only real users can reveal, and before a redesign to identify which parts to keep and which to rework. The paragraph also highlights the importance of having a clear goal in any evaluation process and introduces heuristic evaluation as a technique created by Jakob Nielsen, which is cost-effective and can be applied at any stage of interface design, including with low-fidelity prototypes.

05:00

📚 Heuristic Evaluation Process and Benefits

The second paragraph delves into the specifics of the heuristic evaluation process, explaining how evaluators independently assess the design against a set of heuristics and then discuss their findings collectively. It underscores the value of having multiple evaluators to capture a wide range of usability issues, using a graph adapted from Jakob Nielsen's work to illustrate the diminishing returns of adding evaluators. The speaker also discusses the cost-effectiveness of heuristic evaluation, comparing it to user testing in terms of speed and interpretation of results, and notes that while heuristic evaluation can generate false positives, it is a valuable method for identifying severe problems quickly.
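To make the diminishing-returns point concrete, here is a minimal sketch (not from the lecture) based on Nielsen and Landauer's published model, which assumes each evaluator independently finds a fixed share of the problems, around 31% in their data, consistent with the video's remark that even the best evaluator found only about a third:

```python
# Sketch of the diminishing-returns curve for adding evaluators.
# Assumption: Nielsen & Landauer's model, where i independent evaluators
# uncover a fraction 1 - (1 - L)**i of the problems, with L ~= 0.31.
def proportion_found(i: int, single_rate: float = 0.31) -> float:
    """Expected fraction of usability problems found by i evaluators."""
    return 1 - (1 - single_rate) ** i

for i in range(1, 11):
    print(f"{i:2d} evaluators -> {proportion_found(i):.0%} of problems")
# With 3-5 evaluators the curve already reaches roughly 65-85%,
# which is where Nielsen's 3-to-5 rule of thumb comes from.
```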

10:02

👥 Multi-Evaluator Approach and Severity Ratings

This paragraph discusses the rationale behind using multiple evaluators in the heuristic evaluation process, noting that different evaluators will find different problems, and no single evaluator will identify every issue. It explains the process of assigning severity ratings to identified problems, both individually and then collectively, to prioritize fixes. The speaker also touches on the importance of providing evaluators with realistic background information and training, as well as the need for specificity when listing problems and the consideration of missing elements in the design.
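As a rough illustration of how individual findings and ratings might be recorded and merged, here is a hypothetical sketch; the field names, the example entries, and the use of a median to aggregate are assumptions, since the lecture only says ratings are assigned individually and then combined:

```python
# Hypothetical record format for heuristic-evaluation findings; the lecture
# names the ingredients (issue, heuristic violated, severity, description)
# but not a concrete schema, so everything below is illustrative.
from dataclasses import dataclass
from statistics import median

@dataclass
class Finding:
    issue: str          # short, specific problem statement
    heuristic: str      # which heuristic it violates
    severity: int       # 0 (not a problem) .. 4 (catastrophe)
    description: str    # enough detail for the design team to act on

example = Finding(
    issue="weight cannot be edited after entry",
    heuristic="user control and freedom",   # assumed heuristic for this example
    severity=2,
    description="once a weight is entered there is no way to change it later",
)

# Each evaluator rates independently first; ratings are merged afterwards.
individual_ratings = {example.issue: [2, 3, 2]}   # one rating per evaluator
group_severity = {issue: median(ratings)
                  for issue, ratings in individual_ratings.items()}
print(group_severity)   # {'weight cannot be edited after entry': 2}
```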

15:03

🛠️ Severity Rating System and Debriefing

The final paragraph outlines the severity rating system created by Nielsen, which ranges from zero to four, with zero indicating no usability problem and four signifying a critical issue. It describes how evaluators consider the frequency, impact, and pervasiveness of a problem when assigning a severity rating. The paragraph concludes with the importance of a debriefing session with the design team to discuss the findings, estimate the effort required to fix issues, and brainstorm future design improvements. This session is presented as an opportunity to address problems efficiently and to keep all stakeholders informed and engaged.

Keywords

💡Heuristic Evaluation

Heuristic Evaluation is a usability inspection method where experts evaluate a user interface against a set of heuristics or principles. It is a quick and cost-effective way to identify usability issues in a design. In the video, it is introduced as a technique created by Jakob Nielsen and is emphasized for its efficiency and high value for identifying problems before user testing.

💡Empirical Methods

Empirical Methods refer to the process of evaluating software through observation and experimentation, such as having real users interact with the software. The video mentions empirical methods as one of the various ways to evaluate software, contrasting them with heuristic evaluation which does not require actual user testing.

💡Formal Methods

Formal Methods in the context of the video involve creating a model of user behavior to predict how different user interfaces will perform. This is a more structured approach compared to heuristic evaluation and is used to simulate user interactions before actual testing takes place.

💡Simulation

Simulation in the video refers to automated tests that exercise a user interface to detect usability bugs. It is mentioned as another way to evaluate software, one that works especially well for low-level issues but is harder to apply to higher-level design questions.

💡Peer Critique

Peer Critique is the process of receiving feedback from fellow designers or experts, which is highlighted in the video as an effective form of feedback in design classes and can be applied at any stage of the design process. It is used to refine designs and is particularly valuable before user testing and redesigning.

💡User Testing

User Testing involves having actual users try out the software to identify usability issues. The video discusses the importance of conducting user testing and how heuristic evaluation can complement this method by identifying issues beforehand, thus making user testing more efficient.

💡Heuristics

In the context of the video, Heuristics are usability principles used by evaluators during heuristic evaluation to identify problems in a design. They are guidelines that help in the quick assessment of a user interface's usability, with Jakob Nielsen's 10 heuristics being a widely recognized set.

💡Cost-Benefit Ratio

Cost-Benefit Ratio in the video refers to the financial assessment of the value gained from heuristic evaluation compared to its cost. It is used to demonstrate the effectiveness of heuristic evaluation, with Jakob Nielsen estimating a significant benefit-to-cost ratio for the technique.
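For reference, the figures quoted later in the transcript work out as follows (the cost is only stated as "just over ten thousand dollars", so the division is approximate): estimated benefit / cost ≈ $500,000 / $10,500 ≈ 48, which is the 48-fold ratio Nielsen reports.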

💡False Positives

False Positives are issues identified during heuristic evaluation that may not actually be problems in a real user environment. The video mentions this as a potential drawback of heuristic evaluation, contrasting it with the more accurate but slower user testing.

💡Severity Rating

Severity Rating is a system used in heuristic evaluation to assess the importance of identified usability problems. The video describes a scale from zero to four, developed by Jakob Nielsen, to help prioritize which issues need to be addressed first based on their impact and pervasiveness.
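Written out as a quick reference, the scale might look like the enum below; the video only spells out the two endpoints, so the intermediate labels are taken from Nielsen's published severity scale and should be treated as a standard convention rather than a quote from the lecture.

```python
from enum import IntEnum

class Severity(IntEnum):
    """Nielsen's 0-4 severity scale; intermediate labels follow his
    published wording, since the video only names the endpoints."""
    NOT_A_PROBLEM = 0   # evaluators agree it is not a usability problem
    COSMETIC = 1        # fix only if extra time is available
    MINOR = 2           # low-priority fix
    MAJOR = 3           # important to fix, high priority
    CATASTROPHE = 4     # imperative to fix before release
```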

💡Debrief Session

A Debrief Session, as mentioned in the video, is a meeting with the design team after the heuristic evaluation to discuss the findings, suggest improvements, and estimate the effort required to fix identified issues. It serves as a critical step in the iterative design process.

Highlights

Introduction to heuristic evaluation as a technique for evaluating software usability.

Comparison of empirical methods, formal methods, simulation, and critique-based approaches in software evaluation.

The value of peer critique in the design process for improving software designs.

The optimal stages for incorporating peer critique in the design process.

The importance of having a clear goal in evaluation, even when outcomes may be unexpected.

Heuristic evaluation created by Jakob Nielsen, focusing on finding usability problems in design.

The efficiency and cost-effectiveness of heuristic evaluation compared to other methods.

How heuristic evaluation works with paper prototypes and low-fidelity techniques.

Nielsen's 10 heuristics as a foundation for evaluating user interface design.

The process of heuristic evaluation involving independent review and group discussion.

The benefits of using multiple evaluators and the 'wisdom of crowds' effect.

Jakob Nielsen's rule of thumb for the number of evaluators in heuristic evaluation.

The cost-benefit analysis of heuristic evaluation versus user testing.

The steps involved in conducting a heuristic evaluation, including evaluator training and scenario setup.

The importance of specificity in listing problems found during heuristic evaluation.

Assigning severity ratings to usability issues and the factors considered in this process.

Debriefing with the design team to discuss and address the findings from heuristic evaluation.

The role of heuristic evaluation in the broader context of user interface design and testing.

Transcripts

00:00

In this video we're going to introduce a technique called heuristic evaluation. As we talked about at the beginning of the course, there are lots of different ways to evaluate software. The one you may be most familiar with is empirical methods, where, at some level of formality, you have actual people trying out your software. It's also possible to use formal methods, where you build a model of how people behave in a particular situation, and that enables you to predict how different user interfaces will work. Or, if you can't build a closed-form formal model, you can try out your interface with simulation and have automated tests that can detect usability bugs and ineffective designs; this works especially well for low-level stuff and is harder to do for higher-level stuff. What we're going to talk about today is critique-based approaches, where people give you feedback directly based on their expertise or a set of heuristics.

As any of you who have ever taken an art or design class know, peer critique can be an incredibly effective form of feedback, and it can help you make your designs even better. You can get peer critique at really any stage of a design process, but I'd like to highlight a couple where I think it can be particularly valuable. First, it's really valuable to get peer critique before user testing, because that helps you not waste your users on stuff that's just going to get picked up automatically; you want to focus the valuable resources of user testing on stuff that other people wouldn't be able to pick up on. The rich qualitative feedback that peer critique provides can also be really valuable before redesigning your application, because it can show you which parts of your app you probably want to keep and which other parts are more problematic and deserve redesign. Third, sometimes you know there are problems and you need data to convince other stakeholders to make the changes, and peer critique can be a great way, especially if it's structured, to get the feedback you need to make the changes that you know need to happen. And lastly, this kind of structured peer critique can be really valuable before releasing software, because it helps you do a final sanding of the entire design and smooth out any rough edges. As with most types of evaluation, it's usually helpful to begin with a clear goal, even if what you ultimately learn is completely unexpected.

So what we're going to talk about today is a particular technique called heuristic evaluation. Heuristic evaluation was created by Jakob Nielsen and colleagues about 20 years ago now, and its goal is to find usability problems in a design. I first learned about heuristic evaluation when I TA'd James Landay's intro HCI course, and I've been using it and teaching it ever since. It's a really valuable technique because it lets you get feedback really quickly, and it's a high bang-for-the-buck strategy. The slides I have here are based off James's slides for this course, and the materials are all available on Jakob Nielsen's website.

The basic idea of heuristic evaluation is that you're going to provide a set of people, often other stakeholders on the design team or outside design experts, with a set of heuristics or principles, and they're going to use those to look for problems in your design. Each of them is first going to do this independently, so they'll walk through a variety of tasks using your design to look for these bugs, and you'll see that different evaluators are going to find different problems. They communicate and talk together only at the end: afterwards, at the end of the process, they get back together and talk about what they found. This independent-first, gather-afterwards structure is how you get a wisdom-of-crowds benefit from having multiple evaluators. One reason we're talking about this early in the class is that it's a technique you can use either on a working user interface or on sketches of user interfaces, and so heuristic evaluation works really well in conjunction with paper prototypes and other rapid, low-fidelity techniques that you may be using to get your design ideas out quick and fast.

Here are Nielsen's 10 heuristics, and they're a pretty darn good set. That said, there's nothing magic about these heuristics: they do a pretty good job of covering many of the problems that you'll see in many user interfaces, but you can add any that you want and get rid of any that aren't appropriate for your system. We're going to go over the content of these ten heuristics in the next couple of lectures, and in this lecture I'd like to introduce the process that you're going to use with them.

So here's what you're going to have your evaluators do: give them a couple of tasks to use your design for, and have them do each task, stepping through it carefully several times. While they're doing this, they're going to keep the list of usability principles as a reminder of things to pay attention to.

05:03

Now, which principles will you use? I think Nielsen's ten heuristics are a fantastic start, and you can augment those with anything else that's relevant for your domain. So if you have particular design goals that you would like your design to achieve, include those in the list; or if you have particular goals that you've set up from competitive analysis of designs that are already out there, that's great too; or if there are things that you've seen your own or other designs excel at, those are important goals too and can be included in your list of heuristics. And then, obviously, the important part is that you're going to take what you've learned from these evaluators and use those violations of your heuristics as a way of fixing problems and redesigning.

Let's talk a little bit more about why you might want to have multiple evaluators rather than just one. The graph on the slide is adapted from Jakob Nielsen's work on heuristic evaluation, and each black square in it is a bug that a particular evaluator found. An individual evaluator is a row of this matrix, and there are about twenty evaluators in this set. The columns represent the problems, and what you can see is that some problems were found by relatively few evaluators and other stuff was found by almost everybody. So we'll call the stuff on the right the easy problems and the stuff on the left the hard problems. In aggregate, no evaluator found every problem, and some evaluators found more than others, so there are better and worse evaluators.

So why not have lots of evaluators? Well, as you add more evaluators they do find more problems, but it tapers off: you lose that benefit eventually, and from a cost-benefit perspective it just stops making sense after a certain point. Where's the peak of this curve? It's of course going to depend on the user interface you're working with, how much you're paying people, how much time is involved, all sorts of factors. Jakob Nielsen's rule of thumb for these kinds of user interfaces and heuristic evaluation is that three to five people tends to work pretty well, and that's been my experience too.

One of the reasons that people use heuristic evaluation is that it can be an extremely cost-effective way of finding problems. In one study that Jakob Nielsen ran, he estimated that the value of the problems found with heuristic evaluation was $500,000 and the cost of performing it was just over ten thousand dollars, so he estimates a 48-fold benefit-cost ratio for this particular user interface. Obviously these numbers are back-of-the-envelope and your mileage will vary. You can think about how to estimate the benefit you get from something like this: if you have an in-house software tool, such as an expense-reporting system, use something like productivity increases, since a system that uses people's time more efficiently is a big usability win; and if you've got software that you're making available on the open market, you can think about the benefit in terms of sales or other measures like that.

One thing we can get from that graph is that evaluators are more likely to find severe problems, and that's good news: with a relatively small number of people you're pretty likely to stumble across the most important stuff. However, as we saw, even the best single evaluator in this particular case found only about a third of the problems in the system, and that's why ganging up a number of evaluators, say five, is going to get you most of the benefit you'll be able to get.

If we compare heuristic evaluation and user testing, one thing we see is that heuristic evaluation can often be a lot faster: it takes just an hour or two per evaluator, while the mechanics of getting a user test up and running can take longer, not even accounting for the fact that you may have to build software. Also, heuristic evaluation results come pre-interpreted, because your evaluators are directly providing you with problems and things to fix, and so it saves you the time of having to infer from a usability test what the problem or solution might be. Conversely, experts walking through your system can generate false positives that wouldn't actually happen in a real environment, and this indeed does happen, so user testing is, almost by definition, going to be more accurate.

At the end of the day, I think it's valuable to alternate methods. All of the different feedback techniques that you'll learn in this class can each be valuable, and by cycling through them you can often get the benefits of each: heuristic evaluation and user testing will find different problems, and by running a heuristic evaluation early in the design process you avoid wasting the real users that you may bring in later on.

10:22

So now that we've seen the benefits, what are the steps? The first thing to do is to get all of your evaluators up to speed on the story behind your software and any necessary domain knowledge they might need, and to tell them about the scenario you're going to have them step through. Then, obviously, you have the evaluation phase, where people work through the interface. Afterwards, each person assigns severity ratings; you do this individually first, and then you aggregate those into a group severity rating and produce an aggregate report out of that. Finally, once you've got this aggregated report, you can share it with the design team, and the design team can discuss what to do with it.

Doing this kind of expert review can be really taxing, so for each of the scenarios that you lay out in your design it can be valuable to have the evaluator go through that scenario twice: the first time they'll just get a sense of it, and the second time they can focus on more specific elements. If you've got a walk-up-and-use system, like a ticket machine, then you may not want to give people any background information at all, because if people are just getting off the bus or the train and walking up to your machine without any prior information, that's the experience you want your evaluators to have. On the other hand, if you've got a genomics system or another expert user interface, you'll want to make sure that whatever training you would give to real users, you also give to your evaluators. In other words, whatever the background is, it should be realistic.

When your evaluators are walking through your interface, it's important that they produce a list of very specific problems and explain those problems with regard to one of the design heuristics; you don't want people to just say "I don't like it." To make these results as useful as possible for the design team, you'll want each problem listed separately so that it can be dealt with efficiently. Separate listings also help you avoid listing the same repeated problem over and over again: if there's a problematic element repeated on every single screen, you don't want to list it at every single screen; you want to list it once so that it can be fixed once. These problems can be very detailed, like the name of something being confusing, or they can have more to do with the flow of the user interface or the architecture of the user experience and not be tied to a specific interface element.

Your evaluators may also find that something is missing that ought to be there, and this can sometimes be ambiguous with early prototypes like paper prototypes, so you'll want to clarify ahead of time whether the user interface is something you believe to be complete or whether there are elements intentionally missing. And of course sometimes there are features that are obviously implied by the user interface, so mellow out and relax on those.

After your evaluators have gone through the interface, they can each independently assign a severity rating to all of the problems they found, and that's going to enable you to allocate resources to fix those problems. It can also give you feedback about how well you're doing on the usability of your system in general and give you a kind of benchmark for your efforts in this vein. The severity measure that your evaluators come up with combines several things: the frequency, the impact, and the pervasiveness of the problem they're seeing. So something that appears in only one place may be less of a big deal than something that shows up throughout the entire user interface. Similarly, there are going to be some things, like misaligned text, which may be inelegant but aren't a deal-killer in terms of your software.

14:28

And here's the severity rating system that Nielsen created; you can obviously use anything you want. It ranges from zero to four, where zero means that at the end of the day your evaluators decide it's not actually a usability problem, all the way up to four, something really catastrophic that has to get fixed right away.

Here's an example of a particular problem that our TA Robbie found when he was taking CS 147 as a student. He walked through somebody's mobile interface that had a weight-entry element, and he realized that once you'd entered your weight there was no way to edit it after the fact. So that's kind of clunky; you wish you could fix it, but it's maybe not a disaster. What you see here is that he's listed the issue, he's given it a severity rating, he's noted the heuristic that it violates, and then he describes exactly what the problem is.

Finally, after all your evaluators have gone through the interface, listed their problems, and combined them in terms of severity and importance, you'll want to debrief with the design team. This is a nice chance to discuss general issues in the user interface and qualitative feedback, and it gives you a chance to go through each of these line items and suggest improvements for how you can address these problems. In this debrief session it can be valuable for the development team to estimate the amount of effort it would take to fix each of these problems. So, for example, if you've got something that is a 1 on your severity scale, not too big a deal, maybe it has something to do with wording and it's dirt simple to fix, that tells you to go ahead and fix it. Conversely, you may have something which is a catastrophe and takes a lot more effort, but its importance will lead you to fix it. And there are other things where the importance relative to the cost involved just doesn't make sense to deal with right now. This debrief session can also be a great way to brainstorm future design ideas, especially while you've got all the stakeholders in the room and the issues with the user interface are fresh in their minds. In the next two videos we'll go through Nielsen's 10 heuristics and talk more about what they mean.
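The debrief guidance above (fix cheap, low-severity items immediately, fix catastrophes regardless of cost, and defer issues whose importance does not justify the effort) can be sketched as a simple triage rule. The effort scale and thresholds below are illustrative assumptions, not values from the lecture:

```python
# Illustrative triage of heuristic-evaluation findings by severity (0-4)
# and estimated fix effort; thresholds are assumptions for the example.
def triage(severity: int, effort_days: float) -> str:
    if severity >= 4:            # catastrophe: fix regardless of cost
        return "fix now"
    if effort_days <= 0.5:       # dirt simple (e.g. a wording change): just fix it
        return "fix now"
    if severity >= 3:            # major issue worth scheduling despite the cost
        return "schedule for this release"
    return "defer"               # importance doesn't justify the cost right now

findings = [("unclear wording on submit button", 1, 0.25),
            ("data lost when session expires", 4, 8.0),
            ("settings buried three levels deep", 2, 6.0)]
for issue, sev, days in findings:
    print(f"{issue}: {triage(sev, days)}")
```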


Related Tags
Heuristic Evaluation, User Interface, Software Design, Usability Testing, Expert Feedback, Design Process, User Experience, Jakob Nielsen, Critique Approach, Interface Bugs