A UI should enable a user (the ‘U’ in UI) to interact with a computer or similar device in a way that lets them control and assess the state of the system. On most computers, media players and so on, this is done through an interface (the ‘I’ in UI). While an interface doesn’t have to be easy and coherent for a human to understand, making it understandable and useful is generally an advantage. What I mean is, going out of your way to make it more difficult and less understandable to use is not generally a good thing!
Then again, usability is a very relative thing. In the same way that speaking Catalan to a person fluent only in English (with no knowledge of Catalan, by the way) means nothing, the usability of a product aimed primarily at English speakers could mean absolutely nothing to a Catalan speaker. The same applies within a UI: not all end users of a product are going to be the same, and not everyone thinks alike (great minds apparently do, though!). A world-renowned professor of gene sequencing technology is much more likely to understand a gene sequencing machine than an accountant is. In that case, though, the differences between the two people (assuming they share roughly the same background and language) are a result of what they’ve learnt or done with their lives – the accountant could learn how to operate the gene sequencing machine if he or she wanted to. Yes, it would most likely require some effort, but compare that with the previous example involving language boundaries, where the difference is much more inherent. To work the gene sequencing machine, a native Catalan speaker would first have to understand English, and only then learn how to operate the machine – a considerably steeper learning curve, and one which would most likely not be worthwhile just to use a gene sequencing machine. The point that arises is this: there are two levels of not-understanding. One is an inherent difference, related to language or large cultural boundaries; the other is a learned difference, which exists only because of occupation or lifestyle.
So, in creating a successful UI (by successful, I mean universally understandable), another way is needed to communicate the status of the system, or the meanings of any control surfaces, to the user. This is commonly accomplished with icons. Icons, which are essentially small pictures (or graphical entities), should allow interfaces to become fairly universal. Good icons should satisfy two main criteria: a) they shouldn’t be language specific (the French shouldn’t need one image for a drawing tool and the Germans another in order for both to understand the concept), and b) they should be immediately obvious and recognisable (a pen should signify some kind of drawing tool, scissors some sort of cutting tool…).
Though icons are very useful in the right scenarios, some things cannot easily be replaced by pictures. A menu bar in a computer OS generally requires some form of text to be understood, as does a user manual (though with a sufficiently understandable UI, the user manual should only need to explain things that aren’t obvious). Looking at the menu bar in an application, for example, the File, Edit, View and similar items really need to be text. In my opinion, using icons in their place would decrease the usability of that feature, as there would be too many icons to learn each one’s meaning. How would you represent the menu item ‘Fix Broken Text’? It would be like a game of Pictionary. What about the menu item ‘Update Field’? You could have a picture of a field and something which represents updating, but in a language other than English the word for field (as in a farmer’s field) may not be the same as the word for a computer field. In these scenarios, a degree of localisation is required, which is understandable.
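To make the localisation point concrete, here is a minimal sketch of how menu-item text might be localised with a string table and an English fallback. The menu identifiers, locale codes and translations below are hypothetical examples of my own, not taken from any real application:

```python
# Hypothetical string table mapping locale -> menu item ID -> label.
MENU_STRINGS = {
    "en": {"file": "File", "edit": "Edit", "update_field": "Update Field"},
    "fr": {"file": "Fichier", "edit": "Édition",
           "update_field": "Mettre à jour le champ"},
}


def menu_label(item_id: str, locale: str, fallback: str = "en") -> str:
    """Look up a menu label for a locale, falling back to English."""
    table = MENU_STRINGS.get(locale, MENU_STRINGS[fallback])
    # If the locale's table is missing this item, fall back too.
    return table.get(item_id, MENU_STRINGS[fallback][item_id])


print(menu_label("update_field", "fr"))  # Mettre à jour le champ
print(menu_label("edit", "de"))          # Edit (no German table, so falls back)
```

Real applications typically use a dedicated framework (gettext message catalogues, for instance) rather than a hand-rolled dictionary, but the principle is the same: the text lives in per-language resources, while the icons can stay the same everywhere.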
To be continued…