I’ve been studying usability evaluation methods (UEMs), which, although not directly related to taxonomy work, are relevant for anyone involved in information architecture (IA). I was surprised at how controversial a subject usability is, having assumed that everyone wants their sites to be as usable as possible. However, assessing usability does involve a lot of judgement calls and tradeoffs, which is one reason some people seem to take against it.
You have to decide who to focus your usability testing on, perhaps choosing a “core user group” rather than trying to please everybody. You also have to decide which aspects of usability to focus on. Accessibility is a baseline (everybody should be meeting minimum W3C standards anyway), but you might legitimately decide that you are not going to worry about making your site easy for children to read (e.g. if it is a postgraduate discussion forum). Then you need to decide whether to make individual tasks as efficient as possible (e.g. minimising keystrokes) or to look at the site as a whole (e.g. a social networking site might place a higher value on being fun and funky than on being efficient to use).
Once you have decided who your target users are and which aspect of usability you are most interested in, you can choose a testing method. There seem to be over 100 different methods out there, ranging from fairly straightforward ones like Jakob Nielsen’s Heuristic Evaluation, which gives you a checklist of things to look at, to “expert inspection”, where you simply examine the site yourself to find potential problems. These methods assume you already know quite a lot about what makes a site usable or not.
You could do an experiment, where you set up a task or scenario and measure people’s performance at it. This is often described as laboratory testing, but your “lab” can be just you, a notebook, and a computer for your participants. This sort of test is great if you have one specific function (e.g. an ecommerce checkout) and you want to check that people can follow the steps easily.
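To make the “measure people’s performance” part concrete, here is a minimal sketch of how you might summarise the results of such a test - say, task success rate and completion times. The session data, participant names, and metric choices are all hypothetical, just to illustrate the kind of numbers a small lab test produces:

```python
from statistics import mean, median

# Hypothetical results from a small lab-style usability test:
# whether each participant completed the task, and how long they took.
sessions = [
    {"participant": "P1", "completed": True,  "seconds": 74},
    {"participant": "P2", "completed": True,  "seconds": 121},
    {"participant": "P3", "completed": False, "seconds": 300},
    {"participant": "P4", "completed": True,  "seconds": 95},
]

def summarise(sessions):
    """Return the task success rate and timing stats for completed runs."""
    completed_times = [s["seconds"] for s in sessions if s["completed"]]
    return {
        "success_rate": len(completed_times) / len(sessions),
        "mean_time": mean(completed_times),
        "median_time": median(completed_times),
    }

print(summarise(sessions))
```

Even with tidy numbers like these, deciding *why* P3 failed and what to redesign is the interpretive part that no metric settles for you.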
The methods I liked the most were the more abstract conceptual ones, like CASSM (Concept-based Analysis of Surface and Structural Misfits), where you try to get a picture of users’ expectations and then compare them with the website to see where there are gaps or conflicts.
Interestingly, the literature shows that all methods exhibit a marked “evaluator effect”, with different evaluators getting different results even when using an identical process. I think this is because there is so much interpretation at every stage. The closest you’d get to a “scientific” set of original data would be a carefully controlled usability lab test, but even then, translating the results into redesign suggestions is really an art, not a science.
It is true that there are tradeoffs, and quite a lot of art rather than science, in usability evaluation. But I think there is a moral imperative (not to mention a legal one in the UK - I’m not sure about elsewhere) to at least try to be inclusive, and in most cases it is simply poor marketing to shut out potential customers or make life difficult for them.