11.1.1 You Are Here

We begin each process chapter with a "you are here" picture of the chapter topic in the context of the overall Wheel lifecycle template; see Figure 11-1. Although prototyping is a kind of implementation, design and prototyping in practice often overlap and occur simultaneously. A prototype in that sense is a design representation. So, as you create the design and its representation, you are creating the prototype. Therefore, although in Figure 11-1 it might seem that prototyping is limited to a particular place within a cycle of other process activities, like all other activities, prototyping does not happen only at some point in a rigid sequence.

11.1.2 A Dilemma, and a Solution

Have you ever rushed to deliver a product version without enough time to check it out? Then realized the design needed fixing? Sorry, but that ship has sailed. The sooner you fail and understand why, the sooner you can succeed. As Frishberg (2006) tells us, "the faster you go, the sooner you know." If only you had made some early prototypes to work out the design changes before releasing it! In this chapter we show you how to use prototyping as a hatching oven for partially baked designs within the overall UX lifecycle process. Traditional development approaches such as the waterfall method were heavyweight processes that required enormous investment of time, money, and personnel. Those linear development processes have tended to force a commitment to significant amounts of design detail without any means for visualizing and evaluating the product until it was too late to make any major changes. Construction and modification of software by ordinary programming techniques have in the past been notoriously expensive and time-consuming activities.
Little wonder there have been so many failed software development projects (Cobb, 1995; The Standish Group, 1994, 2001): wrong requirements, unmet requirements, imbalanced emphasis within functionality, poor user experience, and so much customer and user dissatisfaction. In thinking about how to overcome these problems, we are faced with a dilemma. The only way to be sure that your system design is the right design and the best it can be is to evaluate it with real users. However, at the beginning you have a design but no system yet to evaluate, and after the system is implemented, changes are much more difficult. Enter the prototype. A prototype gives you something to evaluate before you have to commit resources to build the real thing. Because a prototype is an early version of the system that can be constructed much faster and less expensively, something to stand in for the real system during evaluation and to inform refinement of the design, it has become a principal technique of the iterative lifecycle.

Universality of prototyping

The idea of prototyping is timeless and universal. Automobile designers build and test mockups, architects and sculptors make models, circuit designers use "breadboards," artists work with sketches, and aircraft designers build and fly experimental designs. Even Leonardo da Vinci and Alexander Graham Bell made prototypes. Thomas Edison sometimes made 10,000 prototypes before getting just the right design. In each case the concept of a prototype was the key to affording the design team and others an early ability to observe something about the final product: evaluating ideas, weighing alternatives, and seeing what works and what does not.

Figure 11-1. You are here; the chapter on prototyping in the context of the overall Wheel lifecycle template.

THE UX BOOK: PROCESS AND GUIDELINES FOR ENSURING A QUALITY USER EXPERIENCE
Alfred Hitchcock, master of dramatic dialogue design, is known for using prototyping to refine the plots of his movies. Hitchcock would tell variations of stories at cocktail parties and observe the reactions of his listeners. He would experiment with various sequences and mechanisms for revealing the story line. Refinement of the story was based on listener reactions as an evaluation criterion. Psycho is a notable example of the results of this technique.

Scandinavian origins

Like many other parts of this overall lifecycle process, the origins of prototyping, especially low-fidelity prototyping, go back to the Scandinavian work activity theory research and practice of Ehn, Kyng, and others (Bjerknes, Ehn, & Kyng, 1987; Ehn, 1988) and participatory design work (Kyng, 1994). These formative works emphasized the need to foster early and detailed communication about design and participation in understanding the requirements for that design.

11.2 DEPTH AND BREADTH OF A PROTOTYPE

The idea of prototypes is to provide a fast and easily changed early view of the envisioned interaction design. To be fast and easily changed, a prototype must be something less than the real system. The choices for your approach to prototyping are about how to make it less. You can make it less by focusing on just the breadth or just the depth of the system or by focusing on less than full fidelity of details in the prototype (discussed later in this chapter).

11.2.1 Horizontal vs. Vertical Prototypes

Horizontal and vertical prototypes represent the difference between slicing the system by breadth and by depth in the features and functionality of a prototype (Hartson & Smith, 1991). Nielsen (1987) also describes types of prototypes based on how a target system is sliced in the prototype. In his usability engineering book (1993), Nielsen illustrates the relative concepts of horizontal and vertical prototyping, which we show as Figure 11-2.
A horizontal prototype is very broad in the features it incorporates, but offers less depth in its coverage of functionality. A vertical prototype contains as much depth of functionality as possible in the current state of progress, but only for a narrow breadth of features. A horizontal prototype is a good place to start with your prototyping, as it provides an overview on which you can base a top-down approach. A horizontal prototype is effective in demonstrating the product concept and for conveying an early product overview to managers, customers, and users (Kensing & Munk-Madsen, 1993), but, because of the lack of depth in detail, horizontal prototypes usually do not support complete workflows, and user experience evaluation with this kind of prototype is generally less realistic. A horizontal prototype can also be used to explore how much functionality will really be used by a certain class of users: you can expose typical users to the breadth of proposed functionality and get feedback on which functions they would or would not use. A vertical prototype allows testing a limited range of features, but those functions that are included are evolved in enough detail to support realistic user experience evaluation. Often the functionality of a vertical prototype can include a stub for, or an actual working, back-end database. A vertical prototype is ideal for times when you need to represent completely the details of an isolated part of an individual interaction workflow in order to understand how those details play out in actual usage. For example, you may wish to study a new design for the checkout part of the workflow for an e-commerce Website. A vertical prototype would show that one task sequence and associated user actions, in depth.

11.2.2 "T" Prototypes

A "T" prototype combines the advantages of both horizontal and vertical, offering a good compromise for system evaluation.
Much of the interface is realized at a shallow level (the horizontal top of the T), but a few parts are done in depth (the vertical part of the T). This makes a T prototype essentially a horizontal prototype, but with the functionality details filled out vertically for some parts of the design. In the early going, the T prototype provides a nice balance between the two extremes, giving you some advantages of each. Once you have established a system overview in your horizontal prototype, as a practical matter the T prototype is the next step toward achieving some depth. In time, the horizontal foundation supports evolving vertical growth across the whole prototype.

Figure 11-2. Horizontal and vertical prototyping concepts, from Nielsen (1993), with permission.

11.2.3 Local Prototypes

We call the small area where horizontal and vertical slices intersect a "local prototype" because the depth and breadth are both limited to a very localized interaction design issue. A local prototype is used to evaluate design alternatives for particular isolated interaction details, such as the appearance of an icon, wording of a message, or behavior of an individual function. It is so narrow and shallow that it is about just one isolated design issue and it does not support any depth of task flow. A local prototype is the solution for those times when your design team encounters an impasse in design discussions where, after a while, there is no agreement and people are starting to repeat themselves. Contextual data are not clear on the question and further arguing is a waste of time. It is time to put the specific design issue on a list for testing, letting the user or customer speak to it in a kind of "feature face-off" to help decide among the alternatives. For example, your design team might not be able to agree on the details of a "Save" dialogue box and you want to compare two different approaches.
So you can mock up the two dialogue box designs and ask for user opinions about how they behave. Local prototypes are used independently from other prototypes and have very short life spans, useful only briefly when specific details of one or two particular design issues are being worked out. If a bit more depth or breadth becomes needed in the process, a local prototype can easily grow into a horizontal, vertical, or T prototype.

11.3 FIDELITY OF PROTOTYPES

The level of fidelity of a prototype is another dimension along which prototype content can be controlled. The fidelity of a prototype reflects how "finished" it is perceived to be by customers and users, not how authentic or correct the underlying code is (Tullis, 1990).

11.3.1 Low-Fidelity Prototypes

Low-fidelity prototypes are, as the term implies, prototypes that are not faithful representations of the details of look, feel, and behavior, but give rather high-level, more abstract impressions of the intended design. Low-fidelity prototypes are appropriate when design details have not been decided or when they are likely to change, and it is a waste of effort and maybe even misleading to try to flesh out the details. Because low-fidelity prototypes are sometimes not taken seriously, the case for low-fidelity prototyping, especially using paper, bears some explaining. In fact, it is perhaps at this lowest end of the fidelity spectrum, paper prototypes, that dwells the highest potential ratio of value gained in user experience per unit of effort expended. A low-fidelity prototype is much less evolved and therefore far less expensive. It can be constructed and iterated in a fraction of the time it takes to produce a good high-fidelity prototype. But can a low-fidelity prototype, a prototype that does not look like the final system, really work?
The experience of many has shown that despite the vast difference between a prototype and the finished product, low-fidelity prototypes can be surprisingly effective. Virzi, Sokolov, and Karis (1996) found that people, customers, and users do take paper prototypes seriously and that low-fidelity prototypes do reveal many user experience problems, including the more severe ones. You can get your project team to take them seriously, too. Your team may be reluctant to do a "kindergarten" activity, but they will see that users and customers love them and that they have discovered a powerful tool for their design projects. But will not the low-fidelity appearance bias users about the perceived user experience? Apparently not, according to Wiklund, Thurrott, and Dumas (1992), who concluded in a study that aesthetic quality (level of finish) did not bias users (positively or negatively) about the prototype's perceived user experience. As long as they understand what you are doing and why, they will go along with it. Sometimes it takes a successful personal experience to overcome a bias against low fidelity. In one of our UX classes, we had an experienced software developer who did not believe in using low-fidelity prototypes. Because it was a requirement in the project for the course, he did use the technique anyway, and it was an eye-opener for him, as this email he sent us a few months later attests:

After doing some of the tests I have to concede that paper prototypes are useful. Reviewing screenshots with the customer did not catch some pretty obvious usability problems and now it is hard to modify the computer prototype. Another problem is that we did not get as complete a coverage with the screenshots of the system as we thought and had to improvise some functionality pretty quickly. I think someone had told me about that....
Low-fidelity prototyping has long been a well-known design technique and, as Rettig (1994) says, if your organization or project team has not been using low-fidelity prototypes, you are in for a pleasant surprise; it can be a big breakthrough tool for you.

11.3.2 Medium-Fidelity Prototypes

Sometimes you need a prototype with a level in between low and high fidelity. Sometimes you have to choose one level of fidelity to stick with because you do not have the time or other resources for your prototype to evolve from low fidelity to high fidelity. For teams that want a bit more fidelity in their design representations than you can get with paper and want to step up to computer-based representations, medium-fidelity prototypes can be the answer. In Chapter 9, for example, this occurs at about the time you undertake intermediate design and early detailed design. As a mechanism for medium-fidelity prototypes, wireframes (also in Chapter 9) are an effective way to show layout and the breadth of user interface objects, and they are fast becoming the most popular approach in many development organizations.

11.3.3 High-Fidelity Prototypes

In contrast, high-fidelity prototypes are more detailed representations of designs, including details of appearance and interaction behavior. High fidelity is required to evaluate design details, and it is how users can see the complete (in the sense of realism) design. High-fidelity prototypes are the vehicle for refining the design details to get them just right as they go into the final implementation. As the term implies, a high-fidelity prototype is faithful to the details, the look, feel, and behavior of an interaction design and possibly even system functionality.
A high-fidelity prototype, if and when you can afford the added expense and time to produce it, is still less expensive and faster than programming the final product, and it will be much more realistic, more interactive, more responsive, and more representative of a real software product than a low-fidelity prototype. High-fidelity prototypes can also be useful as advance sales demos for marketing and even as demos for raising venture capital for the company. An extreme case of a high-fidelity prototype is the fully programmed, whole-system prototype, discussed later in this chapter, including both interaction design and non-user-interface functionality working together. Whole-system prototypes can be as expensive and time-consuming as an implementation of an early version of the system itself, and they entail many of the software engineering management issues of non-prototype system development, including UX and SE collaboration about system functionality and overall design much earlier in the project than usual. Soon after you have a conceptual design mapped out, give it life as a low-fidelity prototype and try out the concept. This is the time to start with a horizontal prototype, showing the possible breadth of features without much depth. The facility of paper prototypes enables you, in a day or two, to create a new design idea, implement it in a prototype, evaluate it with users, and modify it. Low fidelity usually means paper prototypes. You should construct your early paper prototypes as quickly and efficiently as possible. Early versions are just about interaction, not functionality. You do not even have to use "real" widgets. Sometimes a paper prototype can act as a "coding blocker" to prevent time wasted on coding too early.
Table 11-2. Summary comparison of low-fidelity and high-fidelity prototypes

Type of Prototype              | "Strength"                                               | When in Lifecycle to Apply "Strength" | Cost to Fix Appearance | Cost to Fix Sequencing
Low fidelity (e.g., paper)     | Flexibility; easy to change sequencing, overall behavior | Early                                 | Almost none            | Low
High fidelity (e.g., computer) | Fidelity of appearance                                   | Later                                 | Intermediate           | High

Figure 11-3. Depth, breadth, and fidelity considerations in choosing a type of prototype.

At this critical juncture, when the design is starting to come together, programmers are likely to suffer from the WISCY syndrome (Why Isn't Sam Coding Yet?). They naturally want to run off and start coding. You need a way to keep people from writing code until the design gets to the point where it is worth investing in. Once any code gets written, there will be ownership attached and it will get protected and will stay around longer than it should. Even though it is just a prototype, people will begin to resist making changes to "their baby"; they will be too invested in it. And other team members, knowing that it is getting harder to get changes through, will be less willing to suggest changes.

11.6.1 Paper Prototypes for Design Reviews and Demos

Your earliest paper prototypes will have no functionality or interaction, no ability to respond to any user actions. You can demonstrate some predefined sequences of screen sketches as storyboards or "movies" of envisioned interaction. For the earliest design reviews, you just want to show what it looks like and a little of the sequencing behavior. The goal is to see some of the interaction design very quickly, in the time frame of hours, not days or weeks.

11.6.2 Hand-Drawn Paper Prototypes

The next level of paper prototypes will support some simulated "interaction." As the user views screens and pretends to click buttons, a human "executor" plays computer and moves paper pieces in response to those mock user actions.
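The predefined screen-sketch "movies" just described can also be mimicked digitally with scanned sketches and a few lines of code, without abandoning the low-fidelity spirit. A minimal Python sketch (the sketch filenames are hypothetical) that chains scanned screens into click-anywhere HTML pages:

```python
# Chain scanned screen sketches into a click-anywhere "movie" of HTML
# pages, one page per sketch. The sketch filenames are hypothetical.

sketches = ["login.png", "calendar.png", "new_appointment.png"]

def page(image, next_page):
    """Build one HTML page; clicking the sketch advances to the next page."""
    if next_page:
        body = f'<a href="{next_page}"><img src="{image}" alt="{image}"></a>'
    else:
        body = f'<img src="{image}" alt="{image}">'  # last screen: dead end
    return f"<!DOCTYPE html><html><body>{body}</body></html>"

pages = {}
for i, image in enumerate(sketches):
    next_name = f"screen{i + 1}.html" if i + 1 < len(sketches) else None
    pages[f"screen{i}.html"] = page(image, next_name)

# Write the pages out; opening screen0.html in a browser plays the sequence.
for name, html in pages.items():
    with open(name, "w") as f:
        f.write(html)
```

Like the paper storyboard, this supports only one fixed sequence; it shows look and a little sequencing behavior, nothing more.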
11.6.3 Computer-Printed Paper Prototypes

Paper prototypes, with user interface objects and text on paper printed via a computer, are essentially the same as hand-drawn paper prototypes, except slightly higher fidelity in appearance. You get fast, easy, and effective prototypes with added realism at very low cost. To make computer-printable screens for low-fidelity prototypes, you can use tools such as OmniGraffle (for the Mac) or Microsoft Visio. Berger (2006) describes one successful case of using a software tool not intended for prototyping. When used as a prototyping tool, Excel provides grid alignment for objects and text, tabbed pages to contain a library of designs, a hypertext feature used for interactive links, and a built-in primitive database capability. Cells can contain graphical images, which can also be copied and pasted; thus the concept of templates for dialogue boxes, buttons, and so on can be thought of as native to Excel. Berger claimed fast turnarounds, typically on a daily basis.

11.6.4 Is Not Paper Just a Stopgap Medium?

Is not paper prototyping a technique necessary just because we do not yet have good enough software prototyping tools? Yes and no. There is always hope for a future software prototyping tool that can match the fluency and spontaneity afforded by the paper medium. That would be a welcome tool indeed, and perhaps wireframing is heading in that direction, but, given the current software technology for programming prototypes, even for low-fidelity prototypes, there is no comparison with the ease and speed with which paper prototypes can be modified and refined, even if changes are needed on the fly in the midst of an evaluation session.
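In the same use-whatever-tool spirit as the Excel example, even a short script can emit printable screen blanks. A minimal Python sketch, assuming nothing beyond the standard library; the screen title, widget labels, and layout values are all invented for illustration, and the resulting SVG file can be printed and cut up like any other screen sketch:

```python
# Generate a printable low-fidelity screen blank as SVG, using only the
# standard library. Widget names and coordinates are illustrative.

def widget(x, y, w, h, label):
    """Return SVG for one rectangular widget outline with a label."""
    return (f'<rect x="{x}" y="{y}" width="{w}" height="{h}" '
            f'fill="none" stroke="black"/>'
            f'<text x="{x + 8}" y="{y + 20}" font-family="sans-serif" '
            f'font-size="14">{label}</text>')

def screen(title, widgets, width=612, height=792):
    """Assemble a US-letter-sized (points) SVG page from widget specs."""
    body = "".join(widget(*spec) for spec in widgets)
    return (f'<svg xmlns="http://www.w3.org/2000/svg" '
            f'width="{width}" height="{height}">'
            f'<text x="20" y="30" font-size="18">{title}</text>{body}</svg>')

# A hypothetical "Save" dialogue screen for a calendar-app prototype.
save_dialog = screen("Save Appointment", [
    (20, 60, 560, 40, "Title: ____________________"),
    (20, 120, 270, 40, "[ Save ]"),
    (310, 120, 270, 40, "[ Cancel ]"),
])

with open("save_dialog.svg", "w") as f:
    f.write(save_dialog)
```

As with the Excel approach, the payoff is templating: changing one widget spec and reprinting is faster than redrawing a sheet by hand.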
Therefore, at least for the foreseeable future, paper prototyping has to be considered as more than just a stopgap measure or a low-tech substitute for that as yet chimerical software tool; it is a legitimate technology on its own. Paper prototyping is an embodied effort that involves the brain in the creative hand-eye feedback loop. When you use any kind of programming, your brain is diverted from the design to the programming. When you are writing or drawing on the paper with your hands and your eyes and moving sheets of paper around manually, you are thinking about design. When you are programming, you are thinking about the software tool. Rettig (1994) says that with paper, "... interface designers spend 95% of their time thinking about the design and only 5% thinking about the mechanisms of the tool. Software-based tools, no matter how well executed, reverse this ratio."

11.6.5 Why Not Just Program a Low-Fidelity Prototype?

At "run-time" (or evaluation time), it is often useful to write on the paper pages, something you cannot do with a programmed prototype. Also, we have found that paper has much broader visual bandwidth, which is a boon when you want to look at and compare multiple screens at once. When it comes time to change the interaction sequencing in a design, it is done faster and visualized more easily by shuffling paper on a table. Another subtle difference is that a paper prototype is always available for "execution," but a software prototype is only intermittently executable, only between sessions of programming to make changes. Between versions, there is a need for fast turnaround to the next version, but the slightest error in the code will disable the prototype completely. Being software, your prototype is susceptible to a single bug that can bring it crashing down, and you may be caught in a position where you have to debug in front of your demo audience or users.
The result of early programmed prototypes is almost always slow prototyping, not useful for evaluating numerous different alternatives while the trail of interaction design evolution is still hot. Fewer iterations are possible, with more "dead time" in between, during which users and evaluators can lose interest and have correspondingly less opportunity to participate in the design process. Also, of course, as the prototype grows in size, more and more delay is incurred from programming and keeping it executable. Because programmed prototypes are not always immediately available for evaluation and design discussion, sometimes the prototyping activity cannot keep up with the need for fast iteration. Berger (2006) relates an anecdote about a project in which the user interface software developer had the job of implementing design sketches and design changes in a Web page production tool. It took about 2 weeks to convert the designs to active Web pages for the prototype, and in the interim the design had already changed again and the beautiful prototypes were useless.

11.6.6 How to Make an Effective Paper Prototype

Almost all you ever wanted to know about prototyping, you learned in kindergarten. Get out your paper and pencil, some duct tape, and WD-40. Decide who on your team can be trusted with sharp instruments, and we are off on another adventure. There are many possible approaches to building paper prototypes. The following are some general guidelines that have worked for us and that we have refined over many, many iterations.

Start by setting a realistic deadline. This is one kind of activity that can go on forever. Time management is an important part of any prototyping activity. There is no end to the features, polishing, and tweaking that can be added to a paper prototype. And watch out for design iteration occurring before you even get the first prototype finished.
You can go around in circles before you get user inputs, and it probably will not add much value to the design. Why polish a feature that might well change within the next day anyway?

Gather a set of paper prototyping materials. As you work with paper prototypes, you will gather a set of construction materials customized to your approach. Here is a starter list to get you going:

- Blank plastic transparency sheets, 8½ × 11; the very inexpensive write-on kind works fine; you do not need the expensive copier-type plastic
- An assortment of different colored, fine-pointed, erasable and permanent marking pens
- A supply of plain copier-type paper (or a pad of plain, unlined, white paper)
- Assorted pencils and pens
- Scissors
- "Scotch" tape (with dispensers)
- A bottle of Wite-out or other correction fluid
- Rulers or straight edges
- A yellow and/or pink highlighter
- "Sticky" (e.g., Post-it) note pads in a variety of sizes and colors

Keep these in a box so that you have them handy for the next time you need to make a paper prototype.

Work fast and do not color within the lines. If they told you in school to use straight lines and color only within the boxes, here is a chance to revolt, a chance to heal your psyche. Change your personality and live dangerously, breaking the bonds of grade school tyranny and dogmatism, something you can gloat about at the usual post-prototype cocktail party.

Draw on everything you have worked on so far for the design. Use your conceptual design, design scenarios, ideation, personas, storyboards, and everything else you have created in working up to this exciting moment of putting it into the first real materialization of your design ideas.

Make an easel to register (align) your screen and user interface object sheets of paper and plastic. Use an "easel" to register each interaction sheet with the others.
The simple foam-core board easels we make for our short courses are economical and serviceable. On a piece of foam-core board slightly larger than 8½ × 11, on at least two (of the four) adjacent sides, add some small pieces of additional foam-core board as "stops," as seen in Figures 11-4 and 11-5, against which each interaction sheet can be pushed to ensure proper positioning. When the prototype is being "executed" during UX evaluation, the easel will usually be taped to the tabletop for stability.

Figure 11-4. Foam-core board paper prototype easel with "stops" to align the interaction sheets.

Make underlying paper foundation "screens." Start with the simplest possible background for each screen, in pencil or pen on full-size paper (usually 8½ × 11), as a base for all moving parts. Include only parts that never change. For example, in a calendar system prototype, show a monthly "grid," leaving a blank space for the month name. See Figure 11-6.

Use paper cutouts taped onto full-size plastic "interaction sheets" for all moving parts. Everything else, besides the paper foundation, will be taped to transparent plastic sheets. Draw everything else (e.g., interaction objects, highlights, dialogue boxes, labels) in pencil, pen, or colored markers on smaller pieces of paper and cut them out. Tape them onto separate full-size 8½ × 11 blank plastic sheets in the appropriate position, aligned relative to objects in the foundation screen and to objects taped to other plastic sheets. We call this full-size plastic sheet, with paper user interface object(s) taped in position, an "interaction sheet." The appearance of a given screen in your prototype is made up of multiple overlays of these interaction sheets. See Figure 11-7. When these interaction sheets are aligned against the stops in the easel, they appear to be part of the user interface, as in the case of the pop-up dialogue box in Figure 11-8.

Be creative.
Think broadly about how to add useful features to your prototype without too much extra effort. In addition to drawing by hand, you can use simple graphics or paint programs to import images such as buttons, and resize, label, and print them in color. Fasten some objects such as pull-down lists to the top or side of an interaction sheet with transparent tape hinges so that they can "flap down" to overlay the screen when they are selected. See Figure 11-9. Scrolling can be done by cutting slits in your paper menu, which is taped to a plastic sheet. Then a slightly smaller slip of paper with the menu choices can be slid through the slits. See Figure 11-10. Use any creative techniques to demonstrate motion, dynamics, and feedback.

Figure 11-5. Another style of "stops" on a foam-core board paper prototype easel.

Figure 11-6. Underlying paper foundation "screen."

Do not write or mark on plastic interaction sheets. The plastic interaction sheets are almost exclusively for mounting and positioning the paper pieces. The plastic is supposed to be transparent; that is how layering works. Do not write or draw on the plastic. The only exception is for making transparent objects such as highlights or as an input medium on which users write input values. Later we will discuss completely blank sheets for writing inputs.

Make highlights on plastic with "handles" for holding during prototype execution. Make a highlight to fit each major selectable object. Cut out a plastic square or rectangle with a long handle and color in the highlight (usually just an outline so as not to obscure the object or text being highlighted) with a permanent marking pen. See Figure 11-11.

Figure 11-7. Paper cutouts taped to full-size plastic for moving parts.

Figure 11-8. A "Preferences" dialogue box taped to plastic and aligned in easel.
Make your interaction sheets highly modular by including only a small amount on each one. Instead of customizing a single screen or page, build up each screen or display in layers. The less you put on each layer, the more modular and, therefore, the more reuse you will get. With every feature and every variation of appearance taped to a different sheet of plastic, you have the best chance of being able to show the widest variation of appearances and user interface object configurations you might encounter. Be suspicious of a lot of writing/drawing on one interaction sheet. When a prototype user gives an input, it usually makes a change in the display. Each possible change should go on a separate interaction sheet.

Get modularity by thinking about whatever needs to appear by itself. When you make an interaction sheet, ask yourself: Will every single detail on here always appear together? If there is a chance two items on the same interaction sheet will ever appear separately, it is best to put them on separate interaction sheets. They come back together when you overlay them, but they can still be used separately, too. See Figure 11-12.

Figure 11-9. Pull-down menu on a tape "hinge."

Figure 11-10. Paper sliding through a slit for scrolling.

Do lots of sketching and storyboarding before making interaction sheets. This will save time and work. Use every stratagem for minimizing work and time. Focus on design, not writing and paper cutting.

Reuse at every level. Make it a goal to not draw or write anything twice; use templates for the common parts of similar objects. Use a copy machine or scanner to reproduce common parts of similar interaction objects and write in only the differences. For example, for a calendar, use copies of a blank month template, filling in the days for each month.
The idea is to capture in a template everything that does not have to change from one instance to another.

Cut corners when it does not hurt things. Trade away accuracy (when it is not needed) for efficiency (which is always needed). For example, if it is not important for the days and dates to be exactly right for a given month on a calendar, use the same date numbers for each month in your early prototype. Then you can put the numbers in your month template and not have to write any in.

Make the prototype support key tasks. Prototype at least all benchmark tasks from your UX target table, because this prototype will be used in the formative evaluation exercise.

Make a "this feature not yet implemented" message. This is the prototype's response to a user action that was not anticipated or that has not yet been included in the design. You will be surprised how often you use this in user experience evaluation with early prototypes. See Figure 11-13.

Include "decoy" user interface objects. If you include only the user interface objects needed to do your initial benchmark tasks, it may be unrealistically easy for users to do just those tasks. Doing user experience testing with this kind of initial interaction design does not give a good idea of the ease of use of the design when it is complete and contains many more user interface objects to choose from and many more other choices to make during a task. Therefore, you should include many other "decoy" buttons, menu choices, and so on, even if they do not do anything (so participants see more than just the "happy path" for their benchmark tasks). Your decoy objects should look plausible and should, as much as possible, anticipate other tasks and other paths.

Figure 11-11: Selection highlight on plastic with a long handle.
Figure 11-12: Lots of pieces of dialogue as paper cutouts aligned on plastic sheets.
Users performing tasks with your prototype will then face a more realistic array of user interface objects to think about as they choose their next actions. And when they click on a decoy object, that is when you get to use your "not implemented" message. (Later, in the evaluation chapters, we will discuss probing users on why they clicked on an object that is not part of your envisioned task sequence.)

Accommodate data value entry by users. When users need to enter a value (e.g., a credit card number) into a paper prototype, it is usually sufficient to place a clear sheet of plastic (a blank interaction sheet) on top of the layers and let them write the value in with a marking pen; see Figure 11-14. Of course, if your design requires them to enter that number using a touchscreen on an iPad, for example, you have to create a "text input" interaction sheet.

Create a way to manage complex task threads. Before an evaluation session, the prototype "executor" will have all the paper sheets and overlays lined up and ready to put on the easel in response to user actions. When the number of prototype pieces gets very large, however, it is difficult to know which stack of pieces to use at any point in the interaction, and it is even more difficult to clean it all up after the session to make it ready for the next one. As an organizing technique that works most of the time, we have taken to attaching colored dots to the pieces, color coding them according to task threads. Sheets of adhesive-backed colored circles are available at most office supply stores. See Figure 11-15.

Figure 11-13: "Not yet implemented" message.
Figure 11-14: Data entry on clear plastic overlay sheet.
Numbers written on the circles indicate the approximate expected order of usage in the corresponding task thread, which is the order to sort them into when cleaning up after a session.

Pilot test thoroughly. Before your prototype is ready to be used in real user experience evaluation sessions, you must give it a good shakedown. Pilot test your prototype to be sure that it will support all your benchmark tasks. You do not want to make the rookie mistake of "burning" a user participant (subject) by getting them started only to discover that the prototype "blows up" and prevents benchmark task performance. Simulate user experience evaluation conditions by having one member of your team "execute" the prototype while another member plays "user" and tries out all benchmark tasks. The person playing the user should go through each task in as many ways as anyone thinks possible, to head off the "oh, we never thought they would try that" syndrome later in testing. Do not assume error-free performance by your users; try to have appropriate error messages where user errors might occur. When you think your prototype is ready, get someone from outside your group to play the user role in more pilot testing.

Figure 11-15: Adhesive-backed circles for color coding task threads on prototype pieces.

Exercise: See Exercise 11-1, Building a Low-Fidelity Paper Prototype for Your System.

11.7 ADVANTAGES OF AND CAUTIONS ABOUT USING PROTOTYPES

11.7.1 Advantages of Prototyping

In sum, prototypes have these advantages:

- Offer a concrete baseline for communication between users and designers
- Provide a conversational "prop" to support communication of concepts not easily conveyed verbally
- Allow users to "take the design for a spin" (who would buy a car without taking it for a test drive, or buy a stereo system without first listening to it?)
- Give the project visibility and buy-in within customer and developer organizations
- Encourage early user participation and involvement
- Give the impression that the design is easy to change, because a prototype is obviously not finished
- Afford designers immediate observation of user performance and the consequences of design decisions
- Help sell management an idea for a new product
- Help effect a paradigm shift from an existing system to a new system

CONNECTIONS WITH SOFTWARE ENGINEERING

Although SE and UX roles can successfully do much of their work independently and in parallel, the tight coupling between the backend and the user interface means a successful project requires the two roles to communicate, so that each knows generally what the other is doing and how that might affect its own activities and work products. The two roles cannot collaborate without communication, and the longer they work without knowing about the other's progress and insights, the more their work is likely to diverge and the harder it becomes to bring the two lifecycle products together at the end. Communication between the SE and UX roles is important for maintaining activity awareness: how the other group's design is progressing, what process activity they are currently performing, what features are being focused on, what insights and concerns they have for the project, what directions they are taking, and so on. Especially during the early requirements and design activities, each group needs to be "light on its feet," able to inform and respond to events and activities occurring in the counterpart lifecycle. However, in many organizations such necessary communication does not take place because the two lifecycles operate independently; that is, there is no structured development framework to facilitate communication between the two lifecycles, leaving communication (especially cross-domain communication) dependent on individual whim or chance.
Based on our experience, ad hoc communication processes have proven inadequate and often result in nasty surprises that are revealed only at the end, when serious communication finally does occur. This usually happens too late in the overall process. There is a need for a role or a system to ensure that the necessary information is communicated to all relevant parties in the system development effort. Usually that role is a "project manager" who keeps track of the overall status of each role, the work products, and any bottlenecks or constraints. For larger organizations with more complex projects, there is a need for communication systems to automate and help the project manager manage some of these responsibilities.

23.4.2 Coordination

When the two lifecycle concepts are applied in isolation, the resulting lack of understanding between the two roles, combined with the urgency each feels to get its own work done, often leads to working without collaboration and coordination. This often results in the UX needs of the system not being represented in the software design. Without coordination, the two roles also duplicate effort in UX and SE activities when they could be working together. For example, both SE and UX roles conduct separate field visits and client interviews for systems analysis and requirements gathering during the early stages of the project. Without collaboration, each project group reports its results in documentation not usually seen by people in the other lifecycle. Each uses those results to drive only its part of the system design, and the two finally merge at the implementation stage. However, because these specifications were created without coordination and communication, when they are considered together in detail, developers typically discover that the two design parts do not fit together because of large differences and incompatibilities.
Moreover, this lack of coordinated activities presents the appearance of a disjointed development team to the client and is likely to cause confusion: "Why are we being asked similar questions by two different groups from the same development team?" Coordination helps with team building and communication, supports early agreement on goals and requirements, and gives each lifecycle role a chance to recognize the value, objectives, and problems of the other.

23.4.3 Synchronization

Eventually the two lifecycle roles must synchronize their work products for implementation and testing. However, waiting until one absolutely must synchronize creates problems. Synchronization of the design work products of the two lifecycle roles is usually put off until the implementation and testing phases near the end of the development effort, which creates big surprises that are often too costly to address. For example, it is not uncommon to find UX roles being brought into the project late in the development process, even after the SE implementation stage (scenario 1 above). They are asked to test and/or "fix" the usability of an already implemented system, and then, of course, many of the changes proposed by the UX roles must be ignored due to budget and time constraints because they require significant modifications. Those few changes that actually do get included require a significant investment of time and effort because they must be retrofitted (Boehm, 1981). Therefore, it is better to have many synchronization points, earlier and throughout the two project lifecycles. Timely synchronization points allow earlier, more frequent, and less costly "calibration" to keep both design parts on track for a more harmonious final synchronization with fewer harmful surprises.
The idea is for each role to have its work products ready when the other project role needs them, preventing situations where one role must wait for the other to complete a particular work product. However, the more each team works without communication and collaboration, the less likely they are to be able to schedule their project activities to arrive simultaneously at common checkpoints.

23.4.4 Dependency and Constraint Enforcement

Because each part of an interactive system must operate with the others, many system requirements have both SE and UX components. If an SE component or feature is the first to be considered, the SE role should inform the UX role that an interaction design counterpart is needed, and vice versa. When the two roles gather requirements separately and without communication, it is easy to capture requirements that are conflicting, incompatible, or one-sided. Even if there is some ad hoc form of communication between the two groups, it is inevitable that some parts of the requirements or design will be forgotten or will "fall through the cracks."

The lack of understanding of the constraints and dependencies between the two lifecycles' timelines and work products often creates serious problems, such as inconsistencies between the work products of the SE and UX designs. As an example, software engineers perform a detailed functional analysis from the requirements of the system to be built. Interaction designers perform a hierarchical task analysis, with usage scenarios to guide design for each task, based on their requirements. These requirements and designs are maintained separately and not necessarily shared. However, each view of the requirements and design has elements that reflect constraints or dependencies on elements of the counterpart view. For example, each task in the task analysis on the UX side implies the need for corresponding functions in the SE specifications.
Similarly, each function in the software design may reflect the need for access to that functionality through one or more user tasks in the user interface. Without knowledge of such dependencies, when tasks are missing in the user interface or functions are missing in the software because of changes in either lifecycle, the respective sets of designs have a high probability of becoming inconsistent.

In our experience, we often encounter situations illustrating that design choices made in one lifecycle constrain the design options in the other. For example, we have seen user interfaces to software systems designed from a functional point of view, with the code factored to minimize duplication in the backend core. The resulting systems had user interfaces that lacked the interaction cues needed to support smooth task transitions. A task-oriented approach would instead have supported users with screen transitions specific to each task, even though this would have resulted in a possibly "less efficient" composition for the backend. Another case in our experience involved integrating a group of individually designed Web-based systems through a single portal. Each of these systems was designed for separate tasks and functionalities, and they were integrated on the basis of functionality rather than on the way tasks would flow in the new system. The users of the new system had to go through awkward screen transitions when their tasks referenced functions from the different existing systems.

Constraints, dependencies, and relationships exist not only among activities and work products that cross over between the two lifecycles; they also exist within each lifecycle. For example, on the UX side, a key task identified in task analysis should later be matched with a design scenario and a benchmark task.
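The task-function dependencies just described can be made machine-checkable. As a minimal sketch (the task names, function names, and data shapes are all invented for illustration, not taken from any real tool), each UX task records the SE functions it assumes, each SE function records the tasks that expose it, and a consistency check flags anything dangling on either side:

```python
# Hypothetical cross-check of a UX task analysis against an SE functional
# decomposition. All names here are made up for illustration.
ux_tasks = {
    "add appointment": {"create_event", "save_event"},
    "cancel appointment": {"delete_event"},
}
se_functions = {
    "create_event": {"add appointment"},
    "save_event": {"add appointment"},
    "notify_attendees": set(),  # no UX task exposes this function yet
}

def find_inconsistencies(tasks, functions):
    """Return (missing_functions, unexposed_functions)."""
    needed = {f for fns in tasks.values() for f in fns}
    missing = needed - functions.keys()          # a task assumes an absent function
    unexposed = {f for f, ts in functions.items() if not ts}  # no UI access path
    return missing, unexposed

missing, unexposed = find_inconsistencies(ux_tasks, se_functions)
print(missing)    # {'delete_event'}: a UX task with no SE counterpart
print(unexposed)  # {'notify_attendees'}: an SE function with no UX task
```

Run after every change to either view, a check like this is one concrete form the "constraint enforcement" between the two lifecycles could take.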
"We Cannot Change THAT!": Usability and Software Architecture Len Bass, NICTA, Sydney, Australia Bonnie E. John, IBM T. J. Watson Research Center and Carnegie Mellon University Usability analyses or user test data are in; the development team is poised to respond. The software had been modularized carefully so that modifications to the user interfaces (UI) would be fast and easy. When the usability problems are presented, someone around the table exclaims, "Oh, no, we cannot change THAT!" The requested modification or feature reaches too far into the architecture of the system to allow economically viable and timely changes to be made. Even when the functionality is right, even when the UI is separated from that functionality, architectural decisions made early in development have precluded the 815 CONNECTIONS WITH SOFTWARE ENGINEERING implementation of a usable system. Members of the design team are frustrated and disappointed that despite their best efforts, despite following current best practice, they must ship a product that is far less usable than they know it could be. This scenario need not be played out if important usability concerns are considered during the earliest design decisions of a system, that is, during design of the software architecture. Software architecture refers to the internal structure of the software---what pieces are going to make up the system and how they will interact. The relationships between architectural decisions and software quality attributes such as performance, availability, security, and modifiability are relatively well understood and taught routinely in software architecture courses. However, the prevailing wisdom in the last 25 years has been that usability had no architectural role except through modifiability; design the UI to be modified easily and usability will be realized through iterative design, analysis, and testing. 
Software engineers developed "separation patterns," generalized architecture designs that separate the user interface into components that can change independently of the core application functionality. The Model--View--Controller (MVC) pattern, http://en.wikipedia.org/wiki/Model--view--controller, is an example. Separation of the user interface has been quite effective and is commonly used in practice, but it has problems: (1) many aspects of usability require architectural support beyond separation, and (2) the later changes are made to the system, the more expensive they are to achieve. Forcing usability to be achieved through modification means that time and budget pressures are likely to cut off iterations on the user interface and result in a system that is not as usable as it could be.

Consider, for example, giving the user the ability to cancel a long-running command. First, the system must recognize that the particular operation will indeed be long enough that the user might want to cancel it (as opposed to waiting for it to complete and then undoing it). Second, the system must display a dialogue box giving the user the ability to cancel. Third, the system must recognize when the user selects the "cancel" button, regardless of what else it is doing, and respond quickly (or the user will keep hitting the cancel button). Next, the system must terminate the active operation and, finally, restore the system to its state prior to the issuance of the command (having stored all the necessary information prior to its invocation), informing the user if it fails to restore any of the state. For cancel to be supported, the parts of the MVC must all cooperate in a systematic fashion. Early software architecture design determines how difficult this coordination is to implement.
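The responsibilities listed above (recognize a long-running operation, accept a cancel request at any time, stop work, and restore prior state) can be sketched in code. This is only an illustrative sketch, not the authors' pattern: the class, its fields, and the flag-polling structure are invented, and a real system would run `run()` on a worker thread while the UI thread calls `request_cancel()`.

```python
import copy
import threading

class CancellableCommand:
    """Sketch of cancel support: snapshot state before work begins, poll a
    cancel flag between units of work, and restore the snapshot on cancel."""

    LONG_RUNNING_THRESHOLD = 1.0  # seconds; below this, no cancel dialog needed

    def __init__(self, model_state, steps, estimated_seconds):
        self.model_state = model_state          # shared, mutable model
        self.steps = steps                      # units of work to perform
        self._cancel = threading.Event()        # settable from the UI thread
        self.cancelled = False
        # Responsibility 1: recognize that the operation warrants a cancel UI.
        self.needs_cancel_ui = estimated_seconds > self.LONG_RUNNING_THRESHOLD

    def request_cancel(self):
        """Responsibility 3: callable from the UI at any moment."""
        self._cancel.set()

    def run(self):
        snapshot = copy.deepcopy(self.model_state)  # saved before work begins
        for step in self.steps:
            if self._cancel.is_set():               # respond promptly
                self.model_state.clear()
                self.model_state.update(snapshot)   # restore prior state
                self.cancelled = True
                return
            step(self.model_state)                  # do one unit of work

# Demo: cancel arrives before any work happens, so the state stays untouched.
state = {"items": []}
def add_item(s): s["items"].append("x")
cmd = CancellableCommand(state, steps=[add_item] * 3, estimated_seconds=10)
cmd.request_cancel()          # "cancel" pressed immediately
cmd.run()
print(cmd.cancelled, state)   # True {'items': []}
```

Note what the sketch makes visible: the snapshot, the flag check, and the restore each touch the model, the controller, and the scheduling of work, which is exactly why separation of the view alone cannot deliver cancel.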
Difficulty translates into time and cost, which, in turn, reduce the likelihood that the cancel command will be implemented. Cancel is one of two dozen or so usability operations that we have identified as having a significant impact on the usability of a system. These architecturally significant usability scenarios include undo, aggregating data, and allowing the user to personalize their view. For a more complete list of these operations, see Bass and John (2003).

After identifying the architecturally significant usability scenarios important for the end users of a system, the developers (software engineers) must know how to design the architecture and implement the command, with all the subtleties involved in delivering a usable product. For the most part, this information is not taught in standard computer science courses today; consequently, most software developers will learn it only through painful experience. To help this situation, we have developed usability-supporting architectural patterns, embodied in checklists describing responsibilities of the software that architecture designers and developers should consider when implementing these operations (Adams et al., 2005; Golden, 2010). However, only some usability scenarios have been embodied in responsibility checklists, and knowledge of the existence of these checklists among practicing developers is very limited. Organizations that have used these materials, however, have found them valuable. NASA used our usability-supporting architectural patterns in the design of the Mars Exploration Rover Board (MERBoard), a wall-sized collaborative workspace intended to facilitate shoulder-to-shoulder collaboration by MER science teams.
During a redesign of the MERBoard software architecture, 17 architecturally significant usability scenarios were identified as essential for MERBoard, and a majority of the architecture's components were modified in response to the issues raised by the usability-supporting architectural patterns (Adams et al., 2005). ABB considered usability-supporting architectural patterns in the design of a new product line architecture, finding 14 issues with their initial design and crediting the process with a 17:1 return on investment of their architects' time: one day's work by two people saved 5 weeks of work later (Stoll et al., 2009). For more information, see the Usability and Software Architecture Website at http://www.cs.cmu.edu/~bej/usa/index.html.

References

Adams, R. J., Bass, L., & John, B. E. (2005). Applying general usability scenarios to the design of the software architecture of a collaborative workspace. In A. Seffah, J. Gulliksen, & M. Desmarais (Eds.), Human-Centered Software Engineering: Frameworks for HCI/HCD and Software Engineering Integration. Kluwer Academic Publishers.

Bass, L., & John, B. E. (2003). Linking usability to software architecture patterns through general scenarios. Journal of Systems and Software, 66(3), 187-197.

Golden, E. (2010). Early-Stage Software Design for Usability. Ph.D. dissertation, Human-Computer Interaction Institute, School of Computer Science, Carnegie Mellon University.

Stoll, P., Bass, L., Golden, E., & John, B. E. (2009). Supporting usability in product line architectures. In Proceedings of the 13th International Software Product Line Conference, San Francisco, CA, August 24-28, 2009.

23.4.5 Anticipating Change within the Overall Project Effort

In the development of interactive systems, each phase and each iteration has the potential for change. In fact, at least the early part of the UX process is intended to change the design iteratively.
This change can manifest itself during the requirements phase (a growing and evolving understanding of the emerging system by project team members and users), during the design stage (evaluation reveals that the interaction metaphor was not easily understood by users), and so on. Such changes often affect both lifecycles because of the various dependencies that exist between and within the two processes. Therefore, change can be visualized conceptually as a design perturbation that has a ripple effect on all stages in which previous work has been done. For example, during UX evaluation, the UX role may recognize the need for a new task to be supported by the system. This new task requires updating the previously generated hierarchical task inventory (HTI) document and generating new usage scenarios to reflect the addition (along with the rationale). On the SE side, this change to the HTI generates the need to change the functional decomposition (for example, by adding new functions to the functional core to support this task in the user interface). These new functions, in turn, mandate changes to the design, the schedules, and, in some cases, even the architecture of the entire system. Thus, one of the most important requirements for system development is to identify the possible implications and effects of each kind of change and to account for them in the design accordingly.

One particular kind of dependency between lifecycle parts represents a kind of "feed-forward," giving insight into future lifecycle activities. For example, during the early design stages in the UX lifecycle, usage scenarios provide insights into how the layout and design of the user interface might look. In other words, for project activities that are connected to one another (in this case, the initial screen design depends on the usage scenarios), the designers may be able to forecast or derive insights for a later activity from the current one.
Sometimes the feed-forward takes the form of a note: "when you get to screen design, do not forget to consider such and such." When a project team member encounters such anticipations or ideas about potential effects on later stages (on the screen design, in this example), they should be documented while the process is still in its initial stages (the usage scenario phase). When the team member reaches the initial screen design stage, the previously documented insights are then readily available to aid the screen design activity.

23.5 THE CHALLENGE OF CONNECTING SE AND UX

23.5.1 User Interaction Design, Software, and Implementation

In Figure 23-2 we show software design and implementation for just the UI software (middle and bottom boxes). While this separation of UI software from non-user-interface (functional core) software is an acceptable abstraction, it is actually an oversimplification. The current state of the art in software engineering embodies a well-developed lifecycle concept and a well-developed process for producing requirements and design specifications for the whole software system, but it has no process for developing UI software separately from the functional (non-UI) software. Furthermore, there are currently no major software development lifecycle concepts that adequately support including the UX lifecycle as a serious part of the overall system development process. Most software engineering textbooks (Pressman, 2009; Sommerville, 2006) just mention UI design without saying anything about how it happens. Most software engineering courses in colleges and universities describe a software development lifecycle without any reference to the UI.

Figure 23-2: User interaction design as input to UI software design.
Students are taught about the different stages of developing interactive software and, as they finish the implementation stages in the lifecycle, the UI somehow appears automagically. Important questions about how the UI is designed, by whom, and how the two corresponding SE and UX lifecycles are connected are barely mentioned (Pyla et al., 2004). So, in practice, most software requirements specifications include little about the interaction design. If SE people do get input from UX people, they include use cases and screen sketches as part of their requirements, or they might sketch these up themselves, but that is about the extent of it. In reality, however, UX people need to produce interaction design specifications, and SE people need to connect with those specifications in their own lifecycle. In the long run, this is best done within a broader, connected lifecycle model embracing both lifecycle processes and facilitating communication across and within both development domains.

23.5.2 The Promise of Agile Development

In Chapter 19, we attempted such an integrated model in an agile development context. Even though traditional agile methods (such as XP) do not explicitly mention UX processes, we believe that the underlying philosophy of these methodologies (being flexible, ready for change, and evaluation-centered) has the potential to bridge the gap between SE and UX if the methods are extended to include UI components and techniques. As we mentioned in Chapter 19, this requires compromises and adjustments on both sides to respect the core tenets of each lifecycle.

23.5.3 The Pipedream of Completely Separate Lifecycles

Although we have separated out the UX lifecycle for discussion in most of this book, for the convenience of not having to worry too much about the SE counterpart, the two worlds of development cannot exist in isolation, so in this chapter we face our connection to the SE world.
23.5.4 How about Lifecycles in Series?

Consider a make-believe scenario, very similar to the one discussed earlier, in which timing means nothing and SE people sit around waiting for a complete and final interaction design to be ready. Then a series connection of the two lifecycles, as shown in Figure 23-3, might work. The UX people work until they achieve a stable interaction design and decide (by whatever criterion) to stop iterating. They then hand off that finished version of the interaction design and agree that it will not be changed by further iteration in this version of the system. The output of the UX lifecycle used as input to the SE lifecycle is labeled "interaction design specifications as UI software requirements inputs" to emphasize that the interaction design specifications are not yet software requirements but inputs to requirements, because only SE people can make software requirements, and those requirements are for the entire system. We, the HCI folks, provide inputs to only part of that requirements process.

There are, of course, at least two things very wrong with the assumptions behind this series connection of lifecycles. First, and most obvious, the timing just will not work. The absolute lack of parallelism leads to terrible inefficiencies, wasted time, and an unduly long overall product lifecycle. Once the project is started, the SE people could and would, in fact, work in parallel on compiling their own software requirements, deferring interaction design requirements in anticipation of those to come from the UX people. However, if they must wait until the UX people have gotten through their entire iterative lifecycle, they will not get the interaction design specifications to use in specifying UI software requirements until far into the project schedule.

Figure 23-3: UX and SE lifecycles in series.
The second fatal flaw of the series lifecycle connection is that the SE side cannot accommodate the UI changes that inevitably occur after the interaction design "handoff." There is never a time this early in the overall process when the UX people can declare their interaction design "done." UX people are constantly iterating and, even after the last usability testing session, design changes continue to occur for many reasons; for example, platform constraints may not allow certain UI features.

23.5.5 Can We Make an Iterative Version of the Serial Connection?

To get information about the evolving interaction design to SE people earlier, and to accommodate changes due to iteration, perhaps we can change the configuration of Figure 23-3 slightly so that each iteration of the interaction design, instead of just the final interaction design, goes through the software lifecycle; see Figure 23-4. While this would help alleviate the timing problem by keeping SE people informed much earlier of what is going on in the UX cycle, it could be confusing and frustrating to have the UX requirements inputs changing so often. Each UX iteration feeds an SE iteration, but existing SE lifecycle concepts are not equipped for iteration this finely grained; they cannot afford to keep starting over with changing requirements.

23.5.6 It Needs to Be More Collaborative and Parallel

So variations of a series lifecycle connection are fraught with practical challenges. We need parallelism between these two lifecycles. As shown in Figure 23-5, there is a need for something in between to anchor this parallelism. As we mentioned earlier, however, this parallel configuration has the strongest need for collaboration and coordination, represented by the connecting box with the question mark in Figure 23-5. Without such communication, parallel lifecycles cannot work.
However, traditional SE and UX lifecycles do not have mechanisms for that kind of communication. So, in the interest of a realistic UX/SE development collaboration without undue timing constraints, we propose some kind of parallel lifecycle connection with a communication layer in between, such as that of Figure 23-6.

Figure 23-4 Iterating a serial connection.

Figure 23-5 Need for connections between the two lifecycles.

Conceptually, the two lifecycles are used to develop two views of the same overall system. Therefore, the different activities within these two lifecycles have deep relationships among them. Consequently, it is important that the two development roles communicate after each major activity, to ensure that they share the insights from their counterpart lifecycle and maintain situational awareness about each other's progress. The box in the middle of Figure 23-6 is a mechanism for the communication, collaboration, constraint checking, and change management discussed earlier. This communication mechanism allows (or forces) the two development domains to keep each other informed about activities, work products, and (especially) design changes. Each stage of each lifecycle engages in work product flow and communication potentially with each stage of the other lifecycle, but the connection is not one to one between corresponding stages.

Because SE people face many changes to their own requirements, change is certainly not a foreign concept to them, either. It is all about how you handle change. In an ideal world, SE people could just plug in the new interaction design, change the requirements that are affected, and move forward. In the practical world, they need situational awareness from constant feedback from the UX people to prepare them to answer two important questions. First, can our current design accommodate the existing UX inputs?
Second, based on the trajectory of UX design evolution, can we foresee any major problems?

Keeping the two lifecycles parallel has the advantage of retaining them as independent, thereby protecting their individual and inherent interests, foci, emphases, and philosophies. It also ensures that the influence and the expertise of each lifecycle are felt throughout the entire process, not just during the early parts of development.

Figure 23-6 More parallel connections between the two lifecycles.

This is especially important for the UX lifecycle because, if the interaction design were handed over to the SE role early on, any changes necessary due to constraints arising later in the process would be decided by the SE role alone, without consultation with the UX role and without understanding of the original design rationale. Moreover, having the UX role as part of the overall team during the later parts of development allows for catching any misrepresentations or misinterpretations of the UI specifications by the SE role.

23.5.7 Risk Management through Communication, Collaboration, Constraint Checking, and Change Management

Taking a risk management perspective, the box in the middle of Figure 23-6 allows each lifecycle to say to the other "show me your risks," so that each can anticipate the impact on its own risks and allocate resources accordingly. Identifying and understanding risks are legitimate arguments for getting project resources as an investment in reducing overall project risk. If a challenge is encountered in a stage of one lifecycle, it can create a large overall risk for parallel but non-communicating lifecycles because of a lack of timely awareness of the problem in the other lifecycle. Such risks are minimal in a series configuration, but that configuration is unrealistic for other reasons. For example, a problem that stretches the timeline on the UX side can eventually skew the timeline on the SE side.
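The "show me your risks" idea can be made concrete as a shared registry that both lifecycles write to and that each side can query for the other side's open risks. This is a minimal sketch under our own assumptions; the names and structure are hypothetical and not taken from the book.

```python
class RiskRegistry:
    """Illustrative shared registry letting each lifecycle inspect the
    other's risks, in the spirit of 'show me your risks'."""

    def __init__(self):
        # Each side accumulates its own reported risks.
        self._risks = {"UX": [], "SE": []}

    def report(self, side, description, severity):
        """One lifecycle reports a risk it has identified on its own side."""
        self._risks[side].append(
            {"description": description, "severity": severity})

    def show_me_your_risks(self, asking_side):
        """Return the other side's risks so the asker can anticipate impact
        and allocate resources accordingly."""
        other = "SE" if asking_side == "UX" else "UX"
        return list(self._risks[other])


registry = RiskRegistry()

# The UX side reports a schedule risk as soon as it is identified.
registry.report("UX", "usability testing slipping one week", "medium")

# The SE side asks early, rather than discovering the slip late in the game.
se_view = registry.show_me_your_risks("SE")
```

The value here is timing: because the SE side queries the registry continuously, a UX-side problem surfaces while there is still room for an agile response.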
With the configuration of Figure 23-6, the risk can be better contained because early awareness affords a more agile response in addressing it. In cases where the UX design is not compatible with the SE implementation constraints, Figure 23-3 represents a very high risk because neither group is aware of the incompatibility until late in the game. Figure 23-4 represents only a medium risk because its feedback loop can help developers catch major problems. Figure 23-6, however, minimizes risk by fostering earlier communication throughout the two lifecycles, so the risks are more evenly distributed.
