
Thursday, January 1, 2009

Towards A Unified UI Testing Model

This entry was inspired by a post by Avinash Kaushik, 'Experiment or Die. Five Reasons And Awesome Testing Ideas'.

Although 'User' is the operative word in 'User Interface', it took several decades to get usability off the ground as a service companies are willing to pay for. It is true that some companies pioneered user-centered design years ago, but I think it is safe to say that the 'main street' of companies involved in any substantial software project considered the user interface mere eye candy (and many still do). But the evidence for an evolution is the accepted legitimacy of roles such as information architect, user experience architect, usability engineer, interface designer and so on.

As a result of the higher awareness of the UI throughout a software's life cycle, testing the UI during development is now increasingly common, as the tools needed to conduct reasonable testing are more affordable and testing goals are more practical. Consumer-facing interface re/design projects are increasingly adding usability testing to the pre-launch process, and there is certainly a shift from pseudo-scientific testing of eye-movement tracking or user response time to on-screen events, toward measures of task-flow efficiency and task-completion success.

Usability testing software such as Morae and UserVue substantially reduces the expense and limitations of UI testing that were common just a few years ago, when usability labs had to be rented by the hour and were extremely expensive. In early 2006 I was handed sixteen audio cassettes of ninety minutes each after finishing a couple of days in a usability lab. The client spent over $10K on the testing, and yet the budget did not allow for video taping, and there was no time or budget allocated to go over the audio tapes after the sessions. While we learned a lot from the sessions, and $10K was a drop in the bucket for a multi-million dollar project, the singularity of such an exercise turned it into an expensive line item that was difficult to sell to many clients, whose budget for UI work was limited to begin with.
The truth is that the computing power for the real-time audio and video capture possible now was just not there, and best practices were thin, since performing lab tests was a rare occasion for most practitioners. But the big drawback, in my mind, was the limited demographic and geographic distribution of the test participants, who needed to be in relatively close proximity to the testing facility. Today, with web-based testing, we are no longer limited to a physical location and are able to sample a spread that accurately reflects an application's user audience. Methodologies and best practices for UI testing are evolving rapidly, and acceptance of this effort is so high that it is no longer questioned, as long as the cost is reasonable. UI testing prior to (and maybe during) development makes all the sense in the world.

What I often find is a reality in which organizations contract UI design services - especially interaction designers and information architects. As a result, navigation systems, page layouts and the behavior patterns of landing pages are set during the concept development phase. Companies will pay for some iterations of user validation, but there is always real budget pressure to release ASAP and cut costs. I have yet to see a project plan that seriously accounts for sufficient exploration and testing, and I have to fight for it time and again. It is not that clients don't see the value; they just don't want to pay unless the concept is seriously off target.

To be realistic and practical - it takes significant time and labor (= $$$) to establish and preserve consistent patterns of interaction and visual design, and the variations possible within them. The effort can be significantly greater when you are dealing with a multi-national presence, where one needs to account for many stakeholders as well as contrasting cultural sensibilities. It is very rare to have such a luxury, and moreover, critics may argue that the best evolution of the redesigned UI will take place in deployment, not in the 'lab'.
And so, in many cases, the UI design consulting firm leaves around deployment time, after handoff to the internal development team, and this is where the brand-new UI begins to fall apart - there is no one internally with the skill set, time or budget to take charge of testing the evolving interface as it is being readied for deployment. I doubt that the style guides and UI specs are used much; the cynical phrase 'no one reads' is not far off from reality, partly because specs are difficult to produce and hard to consume. But that is another story.

As it turns out, the UI often gets tested again once in production. This is especially true for commercial B2B and B2C RIAs. However, this round of testing, and the decisions about modifications to the UI, are often made outside the context of usability and without the involvement of the UI team that architected it (frequently because the consultants who were hired to develop the application's UI are not retained after launch). In fact, the people who do this round of testing often know very little about UI practice, or may never even look closely at the UI they test and attempt to improve. Contrast the two testing models:

Usability testing:
  • The testing is performed by usability professionals as part of a concentrated, focused UI effort.
  • The testing is typically qualitative because the sample of participants is relatively small (see the sketch after this list).
  • The testing is typically done on a low-fidelity clickable prototype, a semi-functional POC, or, for redesign purposes, on the deployed software.
  • The testing validates the design concept and triggers stakeholders' sign-off, or guides improvements to the existing or redesigned software.
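
To make the small-sample point concrete, here is a minimal sketch in Python (all numbers are hypothetical, not from an actual study): even a seemingly solid completion rate from eight participants comes with a very wide confidence interval, which is why findings at this stage are treated as qualitative signals rather than measurements.

    import math

    def adjusted_wald_interval(successes, n, z=1.96):
        # Adjusted Wald (Agresti-Coull) interval, a common choice for
        # task-completion rates from small usability samples.
        n_adj = n + z ** 2
        p_adj = (successes + z ** 2 / 2) / n_adj
        margin = z * math.sqrt(p_adj * (1 - p_adj) / n_adj)
        return max(0.0, p_adj - margin), min(1.0, p_adj + margin)

    # Hypothetical session: 6 of 8 participants complete the task.
    low, high = adjusted_wald_interval(6, 8)
    print(f"Observed rate: {6/8:.0%}, 95% CI: {low:.0%}-{high:.0%}")
    # Prints roughly 40%-94% - far too wide to read as a metric.
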
Web analytics conversion testing:
  • The testing is performed by web analytics professionals, and the effort is typically not tied to a UI effort: the focus is not on the user interface from a usability perspective but from an optimization perspective.
  • The testing is quantitative, based on actual web analytics data derived from deployment usage (see the sketch after this list).
  • The tested user interface is the production UI.
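
To illustrate what 'quantitative' means here, below is a minimal sketch in Python of the kind of statistic behind a conversion (A/B) comparison; the traffic and conversion figures are hypothetical, and real analytics tools wrap this sort of test in their own reporting.

    import math

    def conversion_z_test(conv_a, n_a, conv_b, n_b):
        # Two-proportion z-test comparing the conversion rates of a
        # control page and a variant page.
        p_a, p_b = conv_a / n_a, conv_b / n_b
        pooled = (conv_a + conv_b) / (n_a + n_b)
        se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
        z = (p_b - p_a) / se
        p_value = math.erfc(abs(z) / math.sqrt(2))  # two-sided p-value
        return z, p_value

    # Hypothetical split test: 181 of 9,000 sessions convert on the
    # current page vs. 230 of 9,000 on the redesigned page.
    z, p = conversion_z_test(181, 9000, 230, 9000)
    print(f"z = {z:.2f}, two-sided p = {p:.4f}")
    # A p-value below 0.05 suggests the lift is unlikely to be noise.
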
Analytics testing takes time - time to plan the testing strategy and prepare it, but most of all, time to execute and wait to see whether trends are changing. We cannot assume that the change will take place overnight. Is there a way to attribute a time factor to the success or failure of a tested approach? Was it a single element that contributed to the change, or a combination? Or is it the latent impact of the brand, of market drivers, of cost reductions, and so on?
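
The 'takes time' point can be made tangible with a back-of-the-envelope sample-size calculation (a standard two-proportion power approximation; all traffic and conversion figures below are hypothetical):

    import math

    def sessions_per_variant(p_base, rel_lift, z_alpha=1.96, z_beta=0.84):
        # Rough sessions needed per variant to detect a relative
        # conversion lift at ~95% confidence and ~80% power.
        p1, p2 = p_base, p_base * (1 + rel_lift)
        p_bar = (p1 + p2) / 2
        num = (z_alpha * math.sqrt(2 * p_bar * (1 - p_bar))
               + z_beta * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
        return math.ceil(num / (p2 - p1) ** 2)

    # Hypothetical: 2% baseline conversion, detecting a 10% relative lift.
    n = sessions_per_variant(0.02, 0.10)
    daily = 5000  # hypothetical sessions per variant per day
    print(f"{n:,} sessions per variant, about {n / daily:.0f} days")

With these made-up numbers the answer is on the order of 80,000 sessions per variant - weeks of traffic, which is exactly why overnight conclusions are not realistic.
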
During development, usability testing is iterative, fast and qualitative. This is where testing ends for many organizations: they stop using the consultants and move to analytics testing performed by a web analytics consultant or, more likely, by someone in-house. Analytics testing is ongoing and quantitative, and it can be like stabbing in the dark - trying to figure out the 'why' without tying it to usability.

There is clearly a gap in the interaction design discourse when it comes to web analytics (and testing for optimization). Analytics is regarded as a 'post' event, not as something you can be proactive about during the design process. What I hope to see is more dialog between the user experience community and the web analytics community around practical ways to integrate testing and develop a full life-cycle approach that combines usability and analytics considerations throughout. More to come.
