All About Website Usability Blog – Holly Phillips


Summary of “It’s Greek to Me: Best Practices for Multi-Lingual Surveys”
October 30, 2009, 4:01 am
Filed under: usability basics, usability testing

Just attended a webinar by Vovici about multi-lingual surveys.  Most of it was common stuff (translate to get higher response rates and better data quality; schedule enough time for professional translations; back-translate for quality checks; remember holiday schedules, etc.).

But there were a few interesting points:

  • Graphics and icons may be inappropriate or confusing for specific cultures
  • Some scales exaggerate cultural issues – Likert scales can be particularly difficult.  Safer choices are constant sum and MaxDiff (most important / least important)
  • Remember to factor in the cost of translation for the invitation, reminders, and any fulfillment messaging if the survey is translated
  • Translators should have excellent understanding of English, the target culture, the industry, and survey research (do NOT in-source the translation or use basic translators without knowledge of surveys)
  • Brief the translators ahead-of-time on the intent of the survey
  • For the translators, create a glossary with the English terms and the target-language terms for consistency
  • Don’t set the survey deadline on a Friday, since Friday falls on the weekend in some regions
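The MaxDiff (best–worst scaling) approach recommended above can be scored with a simple count analysis.  Here’s a minimal sketch in Python, using hypothetical item names and responses purely for illustration:

```python
from collections import Counter

# Hypothetical MaxDiff responses: each task shows a set of items and
# records which one was picked "most important" and "least important".
tasks = [
    {"shown": ["price", "speed", "support", "design"], "best": "price", "worst": "design"},
    {"shown": ["price", "speed", "support", "design"], "best": "speed", "worst": "design"},
    {"shown": ["price", "speed", "support", "design"], "best": "price", "worst": "support"},
]

best = Counter(t["best"] for t in tasks)
worst = Counter(t["worst"] for t in tasks)
shown = Counter(item for t in tasks for item in t["shown"])

# Simple count-based score: (times best - times worst) / times shown,
# which lands in [-1, 1]; higher means more important to respondents.
scores = {item: (best[item] - worst[item]) / shown[item] for item in shown}
for item, s in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{item}: {s:+.2f}")
```

Real MaxDiff studies rotate item subsets across tasks and often use more sophisticated models (e.g. hierarchical Bayes), but count scores like these are a common quick read on the data.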

Good beginner’s guide to multi-language surveys, but I sure wish they had addressed some of the really tough issues, like finding good sample sources in under-developed countries or differences in cultural expectations and legal restrictions for online surveys.



What’s so special about testing radically different designs?
October 24, 2009, 4:58 am
Filed under: customer-centered-design, usability basics, usability testing

I ran across a podcast with Jared Spool and Leah Buley about “getting to good design faster”.  The main point is that too many groups take a single design into testing and work on fine-tuning that design, instead of testing several radically different designs.  I couldn’t agree more, but wonder why this is even an issue.

Isn’t it common sense to take full advantage of your time in front of customers by trying radically different things?  Doesn’t everybody know that putting all your bets on a single design before getting any customer input at all sets you up for putting lipstick on a pig?  This, after all, is the heart of true customer-centered design: don’t rely on your intuition to settle on a design, but rather use the power of customer input to help find the best direction.

I’m perplexed at why, according to the podcast, so few companies seem to do that.  Maybe they only have time or money for a single round of customer testing and would rather fine-tune one design than test several higher-level design concepts.  Or maybe they’re so confident in their initial design that they don’t feel the need to test anything else.  I don’t know, but I do know that it’s a huge missed opportunity to learn what you don’t know, gain new insight, and broaden your horizons.  I, for one, will continue to push for thinking outside the box and testing new ideas alongside your best guess.  You never know when your customers will surprise you.



reblog: The Seven Sins of Usability
October 15, 2009, 4:04 am
Filed under: usability basics

Neil Walker, CTO at Just Search, posted a list of 7 sins of usability that’s spot-on.  Here’s a summary for the time-challenged among us:

  1. Inconsistent and confusing site navigation
  2. Difficult to scan content
  3. Misidentified and unidentified links (can we please see the end of the ‘other’, ‘misc’, and, my personal favorite, ‘useful links’ links??)
  4. Too much industry jargon
  5. Hidden or absent contact information
  6. Not allowing user browser control
  7. Content that looks like advertising (my addition:  content that reads like “marketing fluff”)

If you want the full details, see Neil’s post.



In-person vs phone/webex vs online usability studies
October 8, 2009, 4:51 am
Filed under: usability testing

There are several types of usability testing, all with the same goal of better understanding how well sites will work for typical customers. 

  • Traditional (in-person) usability testing consists of recruiting customers to come to a centralized usability lab, where they sit in a room with a PC and possibly a moderator and are observed as they interact with the site.  Sessions are typically videotaped and usually viewed in real time by observers either in the room or behind a one-way mirror.
     
  • Phone or WebEx usability testing is similar to traditional testing, but the customer stays in his own location and interacts with the site from his own PC.  The moderator is on the phone, viewing the customer’s mouse movements over WebEx or a similar program.  Other observers may also be monitoring over the phone and WebEx.
     
  • Online usability testing is a relatively new form of testing that allows a customer to interact with the site from his own PC, using special software to capture his mouse movements.  He can also be asked virtually any type of research question.  Typically, the questions or instructions appear in one frame while the site of interest appears in another.  The customer is asked to complete a task and then answer some questions about the experience.  Sessions can be moderated or unmoderated, depending on the needs of the research.

Clearly there are pros and cons to each type of usability testing.  The goal of the testing will determine which type will be most appropriate.  A rough comparison of the costs and benefits of the three types of testing is shown here:

                                             Traditional   Phone/WebEx   Online
COST
  Financial                                  High          Medium        Low
  Time & resources                           High          Medium        Low
BENEFITS
  Geographic diversity of respondents        Low           High          High
  Replicates customers’ actual environment   No            Yes           Yes
  Ability to test more users                 Low           High          High
  Ability to ask follow-up questions         High          High          Medium
  Speed of analysis and turn-around          Medium        Medium        High

Clearly understanding the goals of your particular project will help you choose the right (most appropriate and effective) type of testing.



Qualitative vs Quantitative Research
October 1, 2009, 3:14 am
Filed under: usability basics, usability testing

Primary market research tools fit into two broad research categories: QUALITATIVE and QUANTITATIVE.  The objectives of the research will dictate which type is most appropriate.  A common approach is to start a project with qualitative research to gain understanding, then follow with quantitative research to quantify or test the results.

Qualitative Research

Typically qualitative research is used when the objective is to:

  • identify
  • explore
  • understand
  • inquire

Qualitative research, in general, involves more personal techniques with a smaller set of respondents. Focus groups, in-depth interviews (in-person or by phone), customer visits, and other similar techniques are typical for conducting qualitative research.

Quantitative Research

On the other hand, quantitative research techniques should be used if the objective is to:

  • test
  • estimate
  • determine
  • quantify
  • rank order

Quantitative research, in general, involves interviewing enough respondents to be able to extrapolate the results to the market as a whole. Statistical significance will drive the number of respondents required.
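For a sense of how statistical significance drives the respondent count, here’s a minimal sketch of the standard sample-size formula for estimating a proportion from a simple random sample (assuming a large population and the worst-case proportion of 50%; the default values are illustrative, not a recommendation):

```python
import math

def sample_size(margin_of_error=0.05, confidence_z=1.96, proportion=0.5):
    """Respondents needed to estimate a proportion within the given
    margin of error: n = z^2 * p * (1 - p) / e^2, rounded up."""
    n = (confidence_z ** 2) * proportion * (1 - proportion) / margin_of_error ** 2
    return math.ceil(n)

print(sample_size())      # ±5% at 95% confidence -> 385 respondents
print(sample_size(0.03))  # tightening to ±3% -> 1068 respondents
```

Note how halving the margin of error roughly quadruples the required sample, which is why quantitative studies cost so much more than qualitative ones.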

Although the above refers mainly to Market Research, it is also applicable to usability testing.  The goals of your particular usability questions will determine whether qualitative or quantitative research is the best choice.