All About Website Usability Blog – Holly Phillips


Google Website Optimizer vs other multivariate tools
August 27, 2009, 4:27 am
Filed under: multivariate testing

OK, my last post waxed poetic about GWO.  For our purposes it was the perfect tool and let us do everything we needed.  But there are many other multivariate tools and vendors out there, and in some cases those would be a better choice than GWO.

We ran into a few limitations with GWO.  The main one was its inability to track success events separately.  If you have an ad where you’d be equally happy if someone clicked either “Send me a quote” or “Contact me”, you have to identify both of those as success events, and GWO only reports them as a combined total; it can’t tell you which one was clicked.  This wasn’t a problem for us, because we have a separate site metrics package that lets us split out the success events if we need to.  But it may be a limiting factor for other companies or purposes.
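To make that concrete, here’s a minimal sketch of how a separate metrics package can split the two events: each button reports its own event label, while GWO still counts either click as the single combined conversion.  The element IDs and the reportEvent() helper are hypothetical stand-ins, not a real GWO API; substitute whatever your analytics package actually provides.

```typescript
// Hypothetical sketch: report each success event under its own label so a
// separate metrics package can split them, even though GWO lumps them together.
// reportEvent() and the element IDs are made-up stand-ins, not a real GWO API.
declare function reportEvent(category: string, label: string): void;

document.getElementById("quote-button")?.addEventListener("click", () => {
  reportEvent("ad-conversion", "send-me-a-quote"); // success event #1
});

document.getElementById("contact-button")?.addEventListener("click", () => {
  reportEvent("ad-conversion", "contact-me"); // success event #2
});
```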

Another minor limitation is the inability to specify how much traffic is sent to each variation (GWO splits the traffic evenly among all variants).  This became especially disappointing as the test went on and it became clear that the current design was under-performing.  It would have been nice to send only 10% of the traffic there and the remaining 90% to the other variants.  But if we had really wanted to, we could have devised a pretty simple work-around, sketched below.
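One way that work-around could look: put a small weighted splitter in front of the page yourself, so the under-performer only ever sees a sliver of the traffic.  This is my own illustration, not anything GWO provides, and the URLs and weights are invented for the example.

```typescript
// Illustrative work-around: weighted client-side traffic split.
// The URLs and weights are invented for this example.
interface Variant {
  url: string;
  weight: number; // fraction of traffic; weights should sum to 1
}

const variants: Variant[] = [
  { url: "/ad/original", weight: 0.1 },   // send the under-performer only 10%
  { url: "/ad/variant-b", weight: 0.45 },
  { url: "/ad/variant-c", weight: 0.45 },
];

function pickVariant(vs: Variant[]): Variant {
  let r = Math.random();
  for (const v of vs) {
    r -= v.weight;
    if (r <= 0) return v;
  }
  return vs[vs.length - 1]; // guard against floating-point rounding
}

// Redirect this visitor to their weighted-random variant.
window.location.assign(pickVariant(variants).url);
```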

Tim Ash, in his excellent book Landing Page Optimization, goes into more detail on the pros and cons of GWO.  (The book is also a great primer on multivariate testing in general and is definitely worth a look.)  For basic A/B or multivariate testing, or if you’re just starting out in this area, I still recommend you give GWO a try.



Simple yet incredibly effective A/B testing with Google Website Optimizer
August 20, 2009, 4:25 am
Filed under: multivariate testing

We just completed an incredibly successful A/B test using Google’s free Website Optimizer tool (yes, they do full multivariate testing too).  I HIGHLY recommend it! 

The tool is simple to use, and Google provides an impressive number of help documents, examples, and FAQs.  And did I mention it’s FREE?  Of course, you still need to create your page variations which may require external resources, but having a no-cost tool to test the variations is great.

We tested an existing P4P ad against four variations of it.  After just two weeks we had a clear winner that was getting over three times the clickthru rate of the original.  We were able to turn off the low-performing variations at any time, and could just as easily have followed up with a new test of newly iterated designs if we’d wanted to (in this case we were happy with the results and pressed for time, so we simply implemented the winning design from that single round of testing and moved on).

GWO has easy-to-follow instructions on how to tag the pages, provides the code to add to them, and has a very simple UI for tracking results.  Google does all the significance testing behind the scenes and only declares a winner once the statistics support it.
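For the curious, the math behind “the statistics support it” is essentially a significance test on conversion rates.  I don’t know GWO’s exact internals, but a back-of-the-envelope version is a two-proportion z-test like the sketch below; all the numbers are invented.

```typescript
// Rough sketch of a two-proportion z-test, the kind of significance check a
// testing tool runs before declaring a winner. Not GWO's actual algorithm.
interface VariantStats {
  visitors: number;
  conversions: number;
}

// Standard normal CDF via the Abramowitz & Stegun polynomial approximation.
function normalCdf(z: number): number {
  const t = 1 / (1 + 0.2316419 * Math.abs(z));
  const d = 0.3989423 * Math.exp(-z * z / 2);
  const tail =
    d * t * (0.3193815 + t * (-0.3565638 + t * (1.781478 + t * (-1.821256 + t * 1.330274))));
  return z > 0 ? 1 - tail : tail;
}

// One-sided p-value for "the variant converts better than the control".
function pValue(control: VariantStats, variant: VariantStats): number {
  const p1 = control.conversions / control.visitors;
  const p2 = variant.conversions / variant.visitors;
  const pooled =
    (control.conversions + variant.conversions) / (control.visitors + variant.visitors);
  const se = Math.sqrt(pooled * (1 - pooled) * (1 / control.visitors + 1 / variant.visitors));
  return 1 - normalCdf((p2 - p1) / se);
}

// Invented numbers roughly matching a "3x the clickthru" result:
const original = { visitors: 5000, conversions: 60 };  // 1.2% clickthru
const winner   = { visitors: 5000, conversions: 200 }; // 4.0% clickthru
console.log(pValue(original, winner) < 0.05 ? "winner is significant" : "keep testing");
```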

There are other multivariate testing tools out there, and the for-pay ones certainly have more bells and whistles than GWO.  But for basic testing like this, GWO works great, and it will definitely become a standard tool for us.  It’s not often we can achieve a 3.5x improvement for free 🙂



The power of Customer-Centered Design
August 13, 2009, 4:18 am
Filed under: customer-centered-design, usability testing

No matter how well you know your customers, there’s always insight to be gained by watching them actually try to interact with your designs.  The iterative Customer-Centered Design (CCD) approach lets you rapidly test different designs with customers, find out what works and what doesn’t, and quickly iterate to a final, successful design.

The basic approach is as follows:

  1. Create several different design mockups for the pages/functions you want to test.  These can be as crude or polished as you like, but they should all have the same degree of “finish”.  (They should also have at least some basic visual design; don’t test a black-and-white sketch and expect it to accurately represent what the customer will ultimately see.)  We try to make sure that at least one or two of these early design concepts are radical departures from what we normally do.  This keeps our designs fresh and helps us continually push innovation.
  2. Have 5-10 customers try typical tasks on each of these mockups.  We usually do this in one-on-one phone/webex sessions, where the design team listens in while a moderator takes the customer through tasks and probing questions.
  3. Consolidate the feedback and update the mockups.  Usually one or two of the original design directions emerges as a winner at this point, so we focus the new designs around those winning approaches.
  4. Repeat steps 2 and 3 a few more times, iterating on the design each time to get closer and closer to a final successful design.  Depending on the complexity of what we’re designing, we’ll do 3-6 design/testing iterations.

Note that this doesn’t have to take a long time.  We’ve done up to two iterative rounds in a week:  finalize mockups Monday, test Tuesday, incorporate findings into new designs Wednesday, retest Thursday, conclude Friday.

If you follow this approach, you’ll end up with a design that the entire team is highly confident will work well with customers.  You can then follow it up with pre-release and post-release task-based testing to get the quantitative metrics to prove it.



Testing visual design separately from layout and content
August 6, 2009, 4:57 am
Filed under: usability testing, visual design

Just say no!  It used to be a very widespread practice in usability testing to test hand-drawn sketches with customers, keeping things like color, font, and spacing out of the mockups so as not to “distract” the users.  Many design companies still advocate this approach today.  But it assumes that color, font, and spacing have no impact on the usability of a layout, and that has been proven false over and over again.

Luke Wroblewski at Functioning Form has a great blog with example after example of how adding visual design elements can make or break a design.  And we’ve seen this in our own work.  A page filled with black text and a black-on-white call-to-action button failed to get customers to click the button, but the exact same page with black text and a white-on-red button increased the clickthru dramatically.  Likewise, a homepage with blue tabs proved totally unusable (nobody saw the tabs), while the same homepage with yellow tabs was suddenly very successful.

The next time a design firm suggests testing black-and-white sketches for usability, just say no.  What you learn will have very little relation to the usability of the final full-color version.