I recently returned from a visit to New York. The numerical street/avenue system seemed to make navigation very easy, but it is important not to make mistakes over the numbers (I almost confused 145th with 45th Street - they are miles apart!) and you always need two numbers to make a grid reference. In London, you need to remember more names, but usually only one number. I would never try to find somewhere in London with just a street name and a number - I always want to know the area. This seems to make navigation harder, but once you have the area right, you won’t be that far away, even if you get the street name and number wrong, whereas the temptation to rely on numbers rather than area names in New York means you have effectively no error-recovery mechanism.
I personally find it easier to remember names than numbers (maybe Americans are better at numbers), and I navigate London by remembering nearest tube station names. I found subway stations in New York trickier, as so many have what to me are not very memorable names - “8th Street” just doesn’t seem to stick in the same way that “Colindale” or “South Acton” do. If I’ve lost the recall, recognition of numbers is also difficult. I might recognise “Colindale” as being the right shape or sound of word, but if I’ve forgotten 8th, being shown it among 6th, 10th, and 12th doesn’t help. So although New York at first seemed far easier to navigate than London, I still felt I had to work quite hard to build a mental map. It would be interesting to know if one system really is more user-friendly, or if you just get used to either in time.
I suspect that I just prefer the London system because that is what I am used to, and I would still prefer it, even if it were demonstrably less efficient than the numerical system - a good illustration of why change management is so difficult. Even if you introduce a simpler and more efficient system, people yearn for the old familiar one with all its complexities and peculiarities.
I had a look to see what studies on urban navigation are out there, but instead happened on this rather charming public art project:
Wooster Collective: Urban Flora - A Taxonomy Of The City.
Taxonomy and Glossaries for Enterprise Search Terminology - Enterprise Search Practice Blog has a handy little glossary from indexer and heavy user of controlled vocabularies Lynda Moulton (via Taxonomy Watch).
Having spent years working as an editor fussing over consistency of style and orthography, I shouldn’t have been as surprised as I was to find my tags on even this little blog site, written solely by me, had already become a mess. It didn’t take too long to tidy them up, but there are only a handful of articles here so far.
I worked with some extremely clever people in my first “proper” job back in the 90s, and we used to have a “90%” rule regarding algorithm-based language processing (we mostly processed very well-structured text). However brilliant your program, you’d always have 10% of nonsense left over at the end that you needed to sort out by hand - mainly due to the vagaries of natural language and general human inconsistency. I’m no expert on natural language processing, but I get the impression that a lot of people still think 90% is really rather good. Certainly auto-classification software seems to run at a much lower success rate, even after manual training. It strikes me that there’s a parallel between folksonomies and this sort of software. Both process a lot of information cheaply, so make possible processing on a scale that just couldn’t be done before, but you still need someone to tidy up around the edges if you want top quality.
I think the future of folksonomies depends on how this tidying-up process develops. There are various things happening to improve quality - like auto-complete predictive text. Google’s tag game is another approach, and ravelry.com use gentle human “shepherding” of taggers, personally suggesting tags and orthography (thanks to Elizabeth for pointing this one out to me).
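The auto-complete idea can be sketched very simply: as a tagger types, suggest existing tags that share a prefix, nudging them towards the established spelling instead of coining a near-duplicate. A minimal illustration (the vocabulary and function name here are invented for the example, not from any particular system):

```python
def suggest_tags(prefix, vocabulary, limit=5):
    """Return up to `limit` existing tags starting with `prefix` (case-insensitive)."""
    p = prefix.lower()
    matches = [t for t in sorted(vocabulary) if t.lower().startswith(p)]
    return matches[:limit]

# A toy vocabulary of tags already in use on the site.
existing_tags = ["taxonomy", "taxonomies", "tagging", "folksonomy", "folksonomies"]

print(suggest_tags("tax", existing_tags))  # ['taxonomies', 'taxonomy']
```

Even something this crude would catch many of the spelling and singular/plural inconsistencies that make hand-tidying necessary, which is presumably why the predictive-text approach appeals.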
I would really like to get hold of some percentages. If 75% is a decent showing for off-the-peg auto-categorisation/classification software, and we could get up to 90% with bespoke algorithms processing structured text, what percentages can you expect from a folksonomic approach?
I’m still mulling over Helen Longino’s criteria for objectivity in scientific enquiry (see previous post: Science as Social Knowledge) and it occurred to me that folksonomies are not really open and democratic, but are actually obscure and impenetrable. The “viewpoint” of any given folksonomy might be an averaged-out majority consensus, or the tags might have been aggregated in some other way, so you can’t tell whether it is skewed by a numerically small but prolifically tagging group. This is the point Judith Simon made in relation to ratings and review software systems at the ISKO conference, but it seems to me the problem for folksonomies is even worse, because of the echo chamber effect of people amplifying popular tags. Without some way of showing who is tagging what and why, the viewpoint expressed in the folksonomy is a mystery. It doesn’t have to be, but to reveal the background assumptions driving the majority tags you’d need to collect huge amounts of data from every tagger, store it alongside the tags, then run all sorts of analyses and publish them.
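The skew worry can be made concrete with some invented numbers: a handful of prolific taggers can outweigh a much larger group of occasional ones, and the raw tag counts alone give no hint of it.

```python
from collections import Counter

tags = []
for _ in range(3):             # 3 prolific taggers, 10 identical tags each
    tags += ["knitting"] * 10
for _ in range(20):            # 20 occasional taggers, 1 tag each
    tags += ["crochet"]

counts = Counter(tags)
print(counts.most_common())    # [('knitting', 30), ('crochet', 20)]
# "knitting" looks like the majority view, yet only 3 of the 23 people hold it.
```

Without per-tagger data stored alongside the tags, a user of the folksonomy sees only the aggregated counts, which is exactly the opacity described above.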
If the folksonomic tags don’t help you find things, who could you complain to? How do you work out whether it doesn’t help you because you are a minority, or for some other reason? With a taxonomy, the structure is open - you may not like it but you can see what it is - and there will usually be someone “in charge” who you can challenge and criticise if you think your perspective has been overlooked. In many cases the process of construction will be known too. I don’t see an obvious way of challenging or criticising a folksonomy in this way, so presumably it fails Longino’s criteria for objectivity.
You can just stick your own tags into a folksonomy and use them yourself so there is some trace of your viewpoint in there, but if the rest of the folksonomy doesn’t help you search, that means you can only find things once you have tagged them yourself, which would presumably rule out large content repositories. So, you have to learn and live with the imposed system - just like with a taxonomy - but it’s never quite clear exactly what that system is.
I can’t help thinking the information world has become very morbid. There was Green Chameleon’s Dead KM Walking debate, CMS Watch’s Taxonomies are dead punt, and now keyword search is dead, according to the Enterprise Search Center (via Taxonomy Watch).
Stephen Arnold says “Established system vendors and newcomers promise silver bullets that will kill the werewolves plaguing enterprise search. Taxonomies resonate in some vendors’ marketing spiels. Others focus on natural language processing… ” This makes taxonomies sound like some newfangled techie trick, rather than the traditional sorting out we’re all used to. He then states that users expect “a search system to … Offer a web page that gives users specific suggestions and options with hotlinks to topics, categories, and key subjects … provide the user with point-and-click options … Allow the user to drill down or jump across topics.” Are those not taxonomies for navigation?
I thoroughly enjoyed Science as Social Knowledge by the US philosopher Helen Longino. It was recommended to me by Judith Simon, a very smart researcher I met at the ISKO conference in Montreal last summer. She researches trust and social software and suggested that Longino’s analysis of objectivity would be helpful to me. It took me a while to get settled with the book, but I recognised an essentially Wittgensteinian take on the notion of shared meaning. Longino works this into a set of principles for establishing degrees of objectivity in scientific enquiry. If I have grasped it all correctly, she basically says that although there is no such thing as “ideal” objectivity - a one true perspective up in the sky - we do not have to collapse into an “anything goes” relativism. We can accept that background assumptions can be challenged and change, and embed the notion of challenge and criticism into the heart of scientific enquiry itself. That establishes a self-regulating system that is more or less objective, depending on how open it is to criticism and how responsive it is to legitimate challenges. Objectivity arises out of the process of consensus-building in an open, reflective, and self-challenging community.
Applying this to taxonomy work appears to mean that the process of taxonomy building can be more or less objective, depending on how open the process is to the community and to adapting to legitimate challenges or complaints. This seems to be very much like the practical advice offered by taxonomists expressed in terms of “get user buy-in”, “consult all stakeholders”, “ensure that you consider all relevant viewpoints”, or “ensure that you have regular reviews and updates”, so it’s reassuring to know we are basically epistemologically valid in our methods!