All About Website Usability Blog – Holly Phillips

The coming evolution of usability, part 1
February 15, 2010, 4:13 am
Filed under: customer-centered-design, visual design

A change is coming over the usability field, and it promises to help move websites into a whole new realm of usefulness.  This change is a natural result of the evolution of design for usability. 

In the early days, the focus was on making websites usable:  making pages scannable, ensuring links conveyed the right scent and navigation was clear, making processes clear and straightforward, etc.  In essence, it was all about removing frustration and obstacles to using a site.

But now that we’ve grown as an industry and most sites follow at least basic usability rules, we’ve come to realize that this is not enough.  A user may be able to easily complete his task, but if it’s a hum-drum boring experience he’s likely to be merely satisfied and not happy, delighted, or eager to return.

I’m starting to see glimmers of this realization all over the place:

  • Stephen Anderson calls it “seductive interactions”
  • RJ Owen calls it “the differences between usability and user experience”
  • Forrester signals it by including “enjoyability” as one of the primary drivers of satisfaction
  • We see it in our own research that shows that traditional elements of usability account for only 60% of a visitor’s satisfaction with the experience

The question now is:  how do we advance from a site that’s perfectly usable to one that’s engaging and fun?  The answer to that is really the marriage of interaction design, visual design, and visitor engagement.  And it promises to open the door to a world of new possibilities.

(to be continued in next week’s blog)

Trends in Search (from Peter Morville’s UIE webinar)
January 13, 2010, 4:53 am
Filed under: customer-centered-design, Search

Peter Morville and Mark Burrell just presented one of Jared Spool’s UIE webinars. Topic: Search and Discovery Patterns.

The premise is that good Search is critical to a website, and using pre-existing design patterns can really help Search be successful. Nothing new in that premise — anyone who uses the web knows how critical Search is, and how frustrating it is when it doesn’t work as expected. But they did give some good examples of new uses of Search. Here are some of the highlights:

  • “Search is iterative and interactive, what we find changes what we seek” – Interesting concept, and helps explain why people’s expectations change as new search methodologies come onto the scene.
  • “Browsing does not scale” – meaning that at some point, listing the navigation topics becomes unwieldy. Not sure I totally agree with this. If done well, you can index a pretty deep site with browsing navigation in a very usable way. (And, as a colleague points out, some might say that “Search does not scale” at some point — especially if you have a wide variety of types of search results.)
  • “‘Best first’ is one of the primary search patterns, and is the key to Google’s success” – Can’t argue with that! His point is that the algorithms to determine relevance are extremely important, and including “social data” can help immensely. (Social data, or social search, involves paying attention to what other searchers think are successful results for a particular search and then using that information in the relevance algorithm.)
  • “Faceted navigation lets people begin the way they normally do, by entering a search term. But then it gives users a custom map for their search term, and gives them a simple next step” – This is key – so many websites have pages that just dump the customer onto them and have no clear next step. Examples given are Yelp, NCSU Libraries, Land’s End, Buzzillions, Amazon. One of the key aspects of Faceted Navigation is that it blurs the line between search and browse – in Land’s End, for example, you can browse down into the site but still see a faceted navigation display on the left side of each page.
  • “We’re finding ways to take the search interaction beyond just search” – example is Songza, which gives search results on the same page as allowing you to actually play the songs.
  • “We must keep questioning how we define search, how we define the problem” – This was, in my mind, the best part of the presentation. They showed several examples of non-traditional uses of Search. For example: Maybe the box is really a place to ask questions, and we should strive to return answers and not results (Wolfram Alpha). Or maybe search is about helping people to make better decisions (Hunch). Or maybe it’s all about understanding and interactive visual results (Oakland Crimespotting). Or finding similar images (GazoPa). Or searching by singing (iPhone music search). “In the future of search, it’s critical that we consider the user experience across channels” – For example, with the iPhone, the search must work on the phone itself, in the iTunes app, and in the iTunes store. Search needs to allow users to move fluidly across those platforms.
  • “Getting search right requires a microscope, a telescope, and a kaleidoscope” – A microscope to really dig into the details, understand the search logs, and ensure individual searches are relevant. A telescope to see the big picture and how search fits with navigation, other channels, and trends on the web. And a kaleidoscope to see things differently and see how search is a part of many different things.
  • “Search is a hybrid between design, engineering, and marketing. It’s a project and a process, and the problem is never solved” – This is a great quote and a great reminder that the job is never done. Providing great search results requires constant ongoing review of search logs, external trends, refinement using social data, etc.
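The faceted-navigation pattern described above fits in a few lines of code. Here is a minimal, purely illustrative sketch: the tiny catalog, the facet fields, and the `search` function are all invented for the example (real sites like the ones mentioned sit on dedicated engines), but it shows the core idea of returning results plus facet counts as the user’s “simple next step”.

```python
# Illustrative sketch of faceted navigation over a made-up product catalog.
from collections import Counter

PRODUCTS = [
    {"name": "Wool Sweater",   "category": "Sweaters", "color": "red",  "size": "M"},
    {"name": "Cotton Sweater", "category": "Sweaters", "color": "blue", "size": "L"},
    {"name": "Rain Jacket",    "category": "Jackets",  "color": "blue", "size": "M"},
    {"name": "Down Jacket",    "category": "Jackets",  "color": "red",  "size": "S"},
]

def search(query, filters=None):
    """Return matching products plus facet counts -- the 'custom map'
    that gives the user a clear next step."""
    filters = filters or {}
    results = [
        p for p in PRODUCTS
        if query.lower() in p["name"].lower()
        and all(p.get(k) == v for k, v in filters.items())
    ]
    facets = {
        field: Counter(p[field] for p in results)
        for field in ("category", "color", "size")
    }
    return results, facets

results, facets = search("sweater")
print(len(results))        # 2 matches
print(facets["color"])     # Counter({'red': 1, 'blue': 1})

# Clicking a facet value narrows the results -- blurring search and browse:
narrowed, _ = search("sweater", {"color": "blue"})
print(narrowed[0]["name"])  # Cotton Sweater
```

Note how the same `search` function serves both entry points: typing a term or clicking a facet link, which is exactly the search/browse blurring Morville described.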

My next post will summarize the second half of this webinar, led by Mark Burrell of Endeca.

Clarity Trumps Persuasion – always!
January 5, 2010, 4:23 am
Filed under: customer-centered-design, Landing Page design, usability basics

I just attended a great webinar by Marketing Experiments called “Clarity Trumps Persuasion”.  If you’re not familiar with them, Marketing Experiments is a company that specializes in optimizing website landing pages, but the principles they tout are equally applicable to normal web pages.  Their main point:  poorly-designed pages that present visitors with competing objectives end up confusing customers and damaging conversion rates.

A great quote from Flint McGlaughlin:  “The chief enemy to forward momentum is confusion.”  If you don’t have a clear next action on the page, you’re “bleeding revenue”. 

I’ve written earlier about our simple A/B test with a landing page where we applied some of these basic principles and improved our conversion rate by 370% (which directly translates into a 370% increase in ROI, by the way).  But we should all remember that these same principles apply to non-landing pages as well.  Yes, typical site pages may have to serve many purposes (for example, a product page needs to serve those who want to find out about the product before purchase, buy the product, and service or support the product after purchase).  That’s how we often justify having many, many links on a page like this and expecting the customer to figure out what he wants to do.  But that’s in fact the easiest way to confuse and lose the visitor.

If, instead, those pages had very clear next steps and helped walk the customer down the right path, they’d be MUCH more effective.  “Clarity trumps persuasion”.  Indeed – clear pages with clear next steps will always improve customer throughput and conversion rates.
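A quick way to sanity-check a conversion lift like that is a two-proportion z-test on the before/after counts. The sketch below is purely illustrative: the visit and conversion counts are invented (chosen so the lift works out to 370%), not the numbers from our actual test.

```python
# Hypothetical sketch: is an A/B conversion lift statistically meaningful?
# Uses a standard two-proportion z-test; all counts below are invented.
from math import sqrt, erf

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Return (relative lift, z statistic, two-sided p-value) for
    conversion counts conv_a/n_a (control) vs conv_b/n_b (variant)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the normal CDF: Phi(x) = 0.5*(1 + erf(x/sqrt(2)))
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return p_b / p_a - 1, z, p_value

# 20/1000 conversions on the old page vs 94/1000 on the new one (made up)
lift, z, p = two_proportion_z(conv_a=20, n_a=1000, conv_b=94, n_b=1000)
print(f"lift: {lift:.0%}, z = {z:.1f}, p = {p:.2g}")
```

With counts that lopsided, the z statistic is far above the usual 1.96 threshold, so a lift of that size on reasonable traffic is very unlikely to be noise.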

Takeaways from Patrick Hofmann’s “Icons & Images” presentation
January 5, 2010, 4:05 am
Filed under: confidence, customer-centered-design, scent, usability basics, visual design

I just attended a webinar on “Icons & Images” by Patrick Hofmann.  Key takeaways:

When designing an icon, strive for:

  • Simplicity – simplify the design to just the key elements that convey the message.  For example, the iPhone uses only 4 buttons in its calculator icon, instead of showing an entire calculator face.  Use silhouettes or outlines where possible.
  • Distinction – make sure the icons are clearly distinct from other icons used on that same page; use color, contrast, size, and shape to help differentiate.  Again, the iPhone does a good job at this.

  • Standardization – use common icons that people already understand (e.g. envelope for mail; clockface for clock).  The American Society for Graphic Artists and iStock Photos are good places to look for common icons.
  • Words – If needed, use one or two words (no more) in conjunction with an icon.  Some users are more text-based than visual-based so words will help, but only if the icons aren’t clear on their own, and if the addition of text won’t add clutter.  One example – a square box with the words “TV” inside is much more instantly recognizable as a TV than the more traditional box with rabbit-ears (which doesn’t mean anything to younger people).
  • Understandability across cultures  – For example, many cultures don’t understand the old-fashioned US mailbox or telephone icons; better to use more common stylized versions.  Never use hand symbols in icons!  They’re bound to be offensive in at least one country or culture (most likely a Mediterranean-bordering nation).  Things like “OK”, happy face, frowny face, etc are much better.  Red circle with a diagonal slash is universally accepted as meaning no or incorrect or prohibited.

And finally, some good sources for icons:


What’s so special about testing radically different designs?
October 24, 2009, 4:58 am
Filed under: customer-centered-design, usability basics, usability testing

I ran across a podcast between Jared Spool and Leah Buley about “getting to good design faster”.  The main point is that too many groups take a single design into testing and work on fine-tuning that design, instead of testing several radically different designs.  I couldn’t agree more, but wonder why this is even an issue.

Isn’t it common sense to take full advantage of your time in front of customers by trying radically different things?  Doesn’t everybody know that putting all your bets on a single design before getting any customer input at all is setting you up for possibly putting lipstick on a pig?  This, after all,  is the heart of true customer-centered-design:  don’t rely on your intuition to settle on a design, but rather use the power of customer input to help find the best direction.

I’m perplexed at why, according to the podcast, so few companies seem to do that.  Maybe they only have time/money for a single round of customer testing and would rather fine-tune a single design than test several higher-level design concepts.  Or maybe they’re so confident in their initial design that they don’t feel the need to test anything else.  I don’t know, but I do know that it’s a huge missed opportunity to learn what you don’t know, gain new insight, and broaden your horizons.  I for one will continue to push for always thinking outside of the box and testing new ideas along with your best guess.  You never know when your customers will surprise you.

The power of Customer-Centered-Design
August 13, 2009, 4:18 am
Filed under: customer-centered-design, usability testing

No matter how well you know your customers, there’s always insight to be gained by watching them actually try to interact with your designs.  Using the iterative Customer-Centered-Design (CCD) approach lets you rapidly test different designs with customers, find out what works and what doesn’t, and quickly iterate to a final successful design.

The basic approach is as follows:

  1. Create several different design mockups for the pages/functions you want to test.  These can be as crude or polished as you like, but should all have the same degree of “finish”.  (They should also at least have some basic amount of visual design — don’t test a black-and-white sketch and believe it will accurately represent what the customer will ultimately see).  We try to make sure that at least one or two of these early design concepts are radical departures from what we normally do.  This keeps our designs fresh and helps us continually push innovation.
  2. Have 5-10 customers try to do typical tasks on each of these mockups.  We usually do this in one-on-one phone/webex sessions, where we have the design team listening to the interview while a moderator takes the customer through tasks and probes.
  3. Consolidate the feedback and update the mockups.  Usually one or two of the original design directions emerges as a winner at this point, so we focus the new designs around those winning approaches.
  4. Repeat steps 2 and 3 a few more times, iterating on the design each time to get closer and closer to a final successful design.  Depending on the complexity of what we’re designing, we’ll do 3-6 design/testing iterations.

Note that this doesn’t have to take a long time.  We’ve done up to two iterative rounds in a week:  finalize mockups Monday, test Tuesday, incorporate findings into new designs Wednesday, retest Thursday, conclude Friday.

If you follow this approach, you’ll end up with a design that the entire team is highly confident will work well with customers.  You can then follow it up with pre-release and post-release task-based testing to get the quantitative metrics to prove it.

When to do usability testing
June 11, 2009, 4:12 am
Filed under: customer-centered-design, usability testing

Companies usually start doing usability testing by doing what I call “disaster checks”.  They let the designers go off and design as best they can, then bring in some users near the end to make sure they didn’t miss anything.  The problem with this approach is twofold:  (1) it assumes the general design approach is on-target and at most needs some minor tweaks; and (2) it’s too late to react to any but the most minor usability issues found in the testing.

A much more effective and efficient approach is to fully integrate usability testing throughout the entire design process:

  • Before you start designing:  Perform early testing to determine where the customer needs truly are (do they REALLY need that new capability you want to add, or are they much more interested in having you fix one of the existing capabilities?)
  • During the design phase:  Perform iterative customer-centered-design throughout the entire design process.  Start by testing some radically different design mockups with customers, incorporating their feedback into the designs, repeating, and continuing to do this until you have high confidence in the final design
  • After you release the new design:  Perform before-and-after task-based testing to see how easily customers can do their tasks on the old design, and then how easily they can do them on the newly-released design.  We typically measure satisfaction, success, and time-to-complete, as well as collect customer suggestions and path data showing exactly how they tried to do their tasks
  • As you think about the next design:  Continuing to mine your site metrics can tell you a lot about where people get stuck.  Conducting regular customer satisfaction surveys about their use of the website can give you the customers’ own words about the problems they have.  And conducting task-based usability studies or interviews, where you watch customers try to do typical tasks (or, better yet, their own tasks) at your site, yields invaluable insight.  All of this can be used to help direct where to focus your next design efforts for the biggest customer impact.
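The before-and-after measurement above can be as simple as tallying a handful of session records. Here is a minimal, purely illustrative sketch: the session data and field names are made up, and a real study would of course track more than two rounds of four participants.

```python
# Illustrative sketch: summarizing before/after task-based test sessions.
# All session records below are fabricated for the example.
from statistics import median

before = [
    {"success": True,  "seconds": 95,  "satisfaction": 3},
    {"success": False, "seconds": 180, "satisfaction": 2},
    {"success": True,  "seconds": 120, "satisfaction": 4},
    {"success": False, "seconds": 200, "satisfaction": 2},
]
after = [
    {"success": True,  "seconds": 60,  "satisfaction": 4},
    {"success": True,  "seconds": 75,  "satisfaction": 5},
    {"success": True,  "seconds": 50,  "satisfaction": 4},
    {"success": False, "seconds": 150, "satisfaction": 3},
]

def summarize(sessions):
    """Roll one round of sessions up into the three metrics we track:
    success rate, median time-to-complete, and mean satisfaction."""
    return {
        "success_rate": sum(s["success"] for s in sessions) / len(sessions),
        "median_seconds": median(s["seconds"] for s in sessions),
        "mean_satisfaction": sum(s["satisfaction"] for s in sessions) / len(sessions),
    }

for label, sessions in (("before", before), ("after", after)):
    print(label, summarize(sessions))
```

Comparing the two summaries gives you the quantitative proof mentioned above: in this fabricated data, success rises from 50% to 75% and median time-to-complete drops from 150 to 67.5 seconds.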

So the real answer to “when to do usability testing” is “all the time”.  Once you make it an integral part of your design process, not only will you understand your customers better but your design efforts will be much more well-framed and guided, and you’ll have solid metrics on the impact of your design efforts.