Friday, January 9, 2009

Faceted Browsing and Taxonomy

This entry is a work in progress: a tentative study of the current use of faceted (guided) navigation in e-commerce settings and of how it exposes the underlying taxonomy to the user. This blog entry is NOT a critique of the sites discussed here, but rather an exploration of navigation paths, taxonomy-facilitated browsing, and the assumptions made about the underlying information architecture and its impact on clarity and usability (which directly affect conversion and retention rates).

1. Lowe's.com [Captured January 2009]
There are several ways to navigate the site by browsing. The left column provides groupings that parallel the top horizontal menu. In the left column the user sees items grouped by Departments and, below that, items grouped by Rooms. Departments maps to the store, or corresponds to a mental model a user might have of the store, while Rooms maps to a home, or corresponds to a mental model the user might have of a home. Providing multiple browsing models is a nice feature because it supports self-identification: the user benefits from the flexibility to operate at their own comfort level, not the site's.
One can assume that the user is more familiar with the concept of a home, so it is a pity that the navigation model the user is more comfortable with is secondary to the store's model. On the other hand, some users may also be very familiar with the store model, for example store clerks or customer service reps. (Though in my experience the in-store terminals are generally not similar to a company's public e-commerce store.) In any case, the site exposes and organizes the highest level of its taxonomy, and access to its products, in two ways, which increases the flexibility and the probability that the user will select one path to work with and not abandon the site.
The list of items under the Rooms model is obviously much shorter than the one under Departments, and moreover, the Laundry Room is listed right there on this first level. But browsing top down, Path 1 (see image below) is the first the user will encounter: clicking the Appliances link under 'Departments'.
>>> Assumption: The user would know/guess that a dryer is an appliance.
Path 1:
L1.1 - Departments
L1.2 - Appliances (click to get to L2)



If the user is more inquisitive and visually scrolls down to the Rooms section, the obvious, explicit selection is right there. (See image below.)
Path 2:
L1.3 - Rooms
L1.4 - Laundry Room (click to get to L2)



Both browse paths involve two clicks, so neither offers an efficiency gain in terms of physical effort.
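Structurally, what makes this work is that the underlying taxonomy is a polyhierarchy rather than a strict tree: the same product category has a parent in both the Departments model and the Rooms model. A minimal sketch of the idea, with made-up data (not Lowe's actual taxonomy):

```python
# Hypothetical data for illustration: the same category node is reachable
# through both top-level browsing models, so each item can have more than
# one two-click browse path.
taxonomy = {
    "Departments": {"Appliances": ["Washers", "Dryers", "Refrigerators"]},
    "Rooms": {"Laundry Room": ["Washers", "Dryers", "Ironing Boards"]},
}

def paths_to(item):
    """Return every two-click browse path that leads to an item."""
    return [
        f"{model} > {group}"
        for model, groups in taxonomy.items()
        for group, children in groups.items()
        if item in children
    ]

print(paths_to("Dryers"))
# ['Departments > Appliances', 'Rooms > Laundry Room']
```

Either path the user picks, the destination is the same level-2 page; the navigation models differ only in how they group the same leaf nodes.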

Monday, January 5, 2009

Talking Taxonomy To Kids

Everyone knows that you need to use simple words when talking to little kids, so 'big' words like classification, clustering, facets and hierarchy (a distasteful term which even some grownups find difficult to spell) are out. So let's start with a tree. Imagine a tree. What is a tree, really? You could say that a tree is like a 'parent', and it has 'children': a trunk that splits into several big branches, which in turn split into smaller twigs, which split into even smaller twigs where leaves sprout and fruit grows. If the child is really curious, you can talk about the parts of the tree that only moles could see, if moles could see: the taproot, which is the main root that grows vertically into the ground; the lateral roots, which parallel the branches; the radicles, which is just a word for small roots that parallel twigs; and the root hair zone, which is like the leaves. But let's not make things complicated, because we are talking to a child who happens to speak English. Otherwise we'd have to use words like Stamm (trunk), Zweig (branch) and Zweig again for twig, because it seems that the Germans don't have a special word for it, or at least that's what you get from online translators.
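The parent/child analogy maps directly onto how a hierarchical taxonomy is usually represented. A toy sketch, with the tree parts standing in for taxonomy terms:

```python
# Toy illustration: each tree part is a node that knows its parent,
# exactly like a term in a hierarchical taxonomy knows its broader term.
class Node:
    def __init__(self, name, parent=None):
        self.name = name
        self.parent = parent
        self.children = []
        if parent is not None:
            parent.children.append(self)

trunk = Node("trunk")
branch = Node("branch", trunk)
twig = Node("twig", branch)
leaf = Node("leaf", twig)

# Walking up from a leaf recovers its ancestry - the 'broader terms'.
node, path = leaf, []
while node is not None:
    path.append(node.name)
    node = node.parent
print(" < ".join(path))  # leaf < twig < branch < trunk
```

The upward walk is the same operation a site performs when it renders a breadcrumb trail from a leaf category back to the top level.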

So, is taxonomy like a tree? Wait, wait... because there is another way to describe a tree: the bole is the part of the tree between the ground and the first branch, the crown is the part of the tree from the first branch to the top, and the top is the highest part of the tree.

And... a tree is a plant, and there are all kinds of trees; here are just a few: Redwood, Ash, Fir, Spruce, Sequoia. There are banana trees, apple trees, orange trees, and how exactly is a Sequoia related to an Avocado tree? And even more: a collection of trees can form a forest, grove, garden, or park, which by themselves are not just collections of trees but wider concepts. So of course, the development of a taxonomy involves research, and among other things one can find that for many domains, especially in the life sciences, law, government and many others, it is possible to start the process with an existing taxonomy; see for example the taxonomywarehouse.

It is clear that the scope of concepts in the world is endless due to the human instinct to stereotype and classify, first explored by Aristotle, or at least that is our first written record of an arrangement into classes, subclasses and so on as a means to understand our world. But if so, let's keep in mind that children must have an inherent understanding of classification and, by extension, of taxonomy. It is only a matter of vocabulary, then.

Taxonomy is a communication device, and a tricky one, because it is important to make sure that the person who communicates the taxonomy and the audience for the taxonomy understand each other. So the first thing is to understand who will be using your taxonomy, and how.

When you talk to a child, you want to talk about the tree using words like branch, twigs, leaves, etc., and you don't want to discuss apical dominance, foliage and phloem, because this is not the vocabulary of an eight-year-old. And since we clearly can apply elementary school education to user experience design, I would add that developing a taxonomy is as much art as it is science; for that you can read more in Bowker and Star's great book 'Sorting Things Out'.

As a communication device, taxonomy's principal use is in navigation systems and in facilitating good search results. But because a taxonomy maps to a known mental model shared by the user and the system, it is important that the appropriate taxonomy be exposed to the user in navigation systems, drop-lists, and other actionable interface objects. Such a system allows users to self-identify: I'm a kid, I'm a teacher, or I'm a parent/guardian, and the system renders the relevant taxonomies based on appropriate synonym mapping. The multidimensionality of relations within a taxonomic plane is supported by explicit content tagging as well as by folksonomy, a taxonomy set by users, to provide the necessary flexibility.
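The synonym-mapping idea can be sketched in a few lines. This is a hypothetical illustration, with invented concept IDs and audience names, of one concept being rendered with an audience-appropriate label:

```python
# Hypothetical synonym map: one concept ID, several audience-specific
# labels. The IDs and audiences here are invented for illustration.
synonyms = {
    "tree_trunk": {"kid": "trunk", "botanist": "bole"},
    "tree_branch": {"kid": "branch", "botanist": "lateral shoot"},
}

def label(concept, audience):
    # Fall back to the kid-friendly term when there is no label
    # for the selected audience.
    terms = synonyms[concept]
    return terms.get(audience, terms["kid"])

print(label("tree_trunk", "botanist"))  # bole
print(label("tree_branch", "parent"))   # branch (fallback)
```

The taxonomy itself stays stable; only the preferred label changes per audience, which is what lets the same navigation system serve a kid, a teacher and a parent/guardian.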

Thursday, January 1, 2009

Towards A Unified UI Testing Model

This entry was inspired by a post by Avinash Kaushik 'Experiment or Die. Five Reasons And Awesome Testing Ideas'.

Although 'User' is the operative word in 'User Interface', it took several decades to get usability off the ground as a service companies are willing to pay for. It is true that some companies pioneered user-centered design years ago, but I think it is safe to say that the 'main street' of companies involved in any substantial software project considered (and many still do) the user interface mere eye candy. But the evidence for an evolution is the accepted legitimacy of roles such as information and user experience architects, usability engineers, interface designers and so on.

As a result of the higher awareness of the UI throughout a software's life cycle, testing the UI during development is now increasingly common, as the tools needed to conduct reasonable testing are more affordable and testing goals are more practical. Consumer-facing interface re/design projects are increasingly adding usability testing to the pre-launch process, and there is certainly a shift from pseudo-scientific testing of eye-movement tracking or user response time to on-screen events, toward measures of task flow efficiency and task completion success.

Usability testing software such as Morae and UserVue substantially reduces the expense and limitations of UI testing that were common just a few years ago, when usability labs had to be rented by the hour and were extremely expensive. In early 2006 I was handed sixteen audio cassettes of ninety minutes each after finishing a couple of days in a usability lab. The client spent over $10K on the testing, and yet the budget did not allow for video taping, and there was no time or budget allocated to go over the audio tapes after the sessions. While we learned a lot from the sessions, and $10K was a drop in the bucket for a multi-million dollar project, the singularity of such an exercise turned it into an expensive line item that was difficult to sell to many clients, whose budgets for UI work were limited to begin with.
The truth is that the technology was just not there: the computing power for the real-time audio and video capture possible now did not exist, and best practices were thin, since performing lab tests was a rare occasion for most practitioners. But the big drawback, in my mind, was the limited demographic and geographic distribution of the test participants, due to their need to be in relatively close proximity to the testing facility. Today, with web-based testing, we are no longer limited to a physical location and are able to sample a spread that accurately reflects an application's user audience. Methodologies and best practices for UI testing are evolving rapidly, and acceptance of this effort is so high that it is no longer questioned, as long as the cost is reasonable. UI testing prior to (and maybe during) development makes perfect sense.

What I often find is a reality in which organizations contract UI design services - especially interaction designers and information architects. As a result, navigation systems, page layouts and behavior patterns of landing pages are set during the concept development phase. Companies will pay for some iterations of user validation, but there is always real budget pressure to release ASAP and cut costs. I have yet to see a project plan that seriously accounts for sufficient exploration and testing, and I have to fight for it time and again. It is not that clients don't see the value; they just don't want to pay unless the concept is seriously off target.

To be realistic and practical - it takes significant time and labor (=$$$) to determine and preserve patterns of consistent interaction and a visual design approach, along with the possible variations. The effort can be significantly bigger when you are dealing with a multi-national presence, where one needs to account for many stakeholders as well as contrasting cultural sensibilities. It is very rare to have such luxury, and moreover, critics may argue that the best evolution of the redesigned UI will take place in deployment, not in the 'lab'.
And so, in many cases, the UI design consulting firm leaves around deployment time, after handoff to the internal development team, and this is where the brand new UI begins to fall apart: there is no one internally with the skill set, time, or budget to take charge of testing the evolving interface as it is being readied for deployment. I doubt that the style guides and UI specs are used much; the cynical phrase 'no one reads' is not far off from reality, partially because specs are difficult to produce and hard to consume. But that is another story.

As it turns out, the UI often gets tested again once in production. This is especially true for commercial B2B and B2C RIAs. However, this round of testing, and the decisions about modifications to the UI, are often done outside the context of usability, and often without the involvement of the UI team that architected it (because the consultants who were hired to develop the application's UI are frequently not retained after the launch). In fact, the people who do this round of testing often know very little about UI practice, and may not even look at the UI they test and attempt to improve.

Usability testing:
  • The testing is performed by usability professionals, part of a concentrated, focused UI effort.
  • The testing is typically qualitative because the sample of participants is relatively small.
  • The testing is typically done on a low-fidelity clickable prototype, a semi-functional POC, or, for redesign purposes, on the deployed software.
  • The testing validates the design concept and triggers stakeholders' sign-off, or guides improvements to the existing or redesigned software.
Web Analytics Conversion testing:
  • The testing is performed by web analytics professionals and the effort is typically not related to a UI effort: the user interface is examined not from a usability perspective but from an optimization perspective.
  • The testing is quantitative, based on actual web analytics data derived from deployment usage.
  • The tested user interface is the production UI.
Analytics testing takes time: time to plan the testing strategy and prepare it, but most of all, time to execute and to wait and see whether trends are changing. We cannot assume that the change will take place overnight. Is there a way to attribute a time factor to the success or failure of a tested approach? Was it a single element that contributed to the change, or was it the combination? Or was it the latent impact of the brand, of market drivers, of cost reductions, and so on?
During development, usability testing is iterative, fast and qualitative. Often this is where testing ends: many organizations stop using the consultants and move to analytics testing performed by a web analytics consultant or, more likely, by someone in-house. Analytics testing is ongoing and quantitative, and it can be like stabbing in the dark: trying to figure out the Why without tying it to usability.
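To make the quantitative side concrete: one common way analytics teams decide whether a UI change actually moved the conversion rate, or whether the difference is just noise, is a two-proportion z-test. A minimal sketch with hypothetical numbers (the visitor and conversion counts below are invented for illustration):

```python
import math

def conversion_z(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-test for an A/B conversion experiment."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Hypothetical experiment: variant B converts at 6.5% vs. 5.0% for A,
# with 2,400 visitors in each bucket.
z = conversion_z(120, 2400, 156, 2400)
print(round(z, 2))  # |z| > 1.96 means significant at the 95% level
```

Note what the test does and does not say: it tells you the difference is unlikely to be chance, but nothing about *why* users converted, which is exactly the gap that usability work is meant to fill.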

There is clearly a gap in the interaction design discourse when it comes to web analytics (and testing for optimization). Analytics is regarded as a 'post' event, not as something you can be proactive about during the design process. What I hope to see is more dialog between the user experience community and the web analytics community around practical ways to integrate testing and develop a full life cycle approach that combines usability and analytics considerations throughout. More to come.