Marketing through usability

One of the best forms of marketing your library's technology is word of mouth, and one of the best ways to earn word-of-mouth marketing is to provide "usable" products and services. There are many different articulations of usability, including ergonomics, human-computer interaction, and user-centered design. This column describes usability and how it relates to the future of library service.

What is usability

Believe it or not, there is a set of international (ISO) standards on usability defining it as "the extent to which a product can be used by specified users to achieve specific goals with effectiveness, efficiency, and satisfaction in a specified context of use." [1] The standard elaborates with definitions for "effectiveness", "efficiency", and "satisfaction". Effectiveness is the extent to which a goal, or task, is achieved. Efficiency is the amount of effort required to accomplish a goal. Satisfaction is the level of comfort that the user feels when using a product and how acceptable the product is to users as a vehicle for achieving their goals. Put into my own words, a product or service is highly usable if it can be used as a tool to accomplish a set of defined tasks easily and with a minimum of frustration. Ideally, frustration should not exist at all. Instead, the tool should instill a sense of accomplishment upon the completion of the tasks.
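To make these three measures concrete, here is a minimal sketch of how they might be computed from test-session records. All of the data and the record layout below are illustrative assumptions, not part of the ISO standard:

```python
# Sketch: computing the three ISO 9241-11 usability measures from
# hypothetical test-session records. All data below is illustrative.

# Each record: (task completed?, seconds spent, satisfaction rating 1-5)
sessions = [
    (True, 95, 4),
    (True, 120, 3),
    (False, 240, 2),
    (True, 80, 5),
]

# Effectiveness: the proportion of tasks completed successfully.
effectiveness = sum(1 for done, _, _ in sessions if done) / len(sessions)

# Efficiency: mean effort (here, time in seconds) per attempt.
efficiency = sum(secs for _, secs, _ in sessions) / len(sessions)

# Satisfaction: mean self-reported comfort on a 1-to-5 scale.
satisfaction = sum(rating for _, _, rating in sessions) / len(sessions)

print(f"effectiveness = {effectiveness:.2f}")       # 0.75
print(f"mean time-on-task = {efficiency:.1f}s")     # 133.8s
print(f"mean satisfaction = {satisfaction:.2f}")    # 3.50
```

The point is only that each of the three terms can be operationalized as a number you can track from one redesign to the next.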

A great many books, articles, websites, conferences, and consultants exist in the name of usability. It has gone under the rubric of ergonomics and user-centered design (UCD), and in the realm of computers it has been called human-computer interaction (HCI). Throughout its development, usability has always been associated with the interaction between a user and a product or service. Understanding usability means understanding the user's needs, desires, and abilities combined with the goals, functions, and limitations of the product or service. To understand the user, usability must take into account things like experience, domain knowledge, cultural background, and disabilities, as well as age and gender. To understand the product or service, usability addresses its ability to be learned, experimented with, and even re-used after periods of non-use. It addresses the limitations of the product or service itself as well as the limitations of the experienced user of the product or service.

There is a growing emphasis for products and services to be usable, or "intuitive". (Personally, I think the word "intuitive", used in this context, is a misnomer. What the programmers and marketers really mean is "operates much like other systems you've used".) This is true because technology, specifically computer technology, is becoming more and more a part of our everyday lives. Just like automobiles, most people don't want to "get under the hood" of a computer. Instead they want to use the computer to accomplish particular goals. As one person put it, people don't want to learn a new hobby (i.e. auto mechanics or computer programming); they want to fulfill a task.

We all provide information services, and many of these services nowadays are mediated through computers. Therefore, you need to make sure the tools used to facilitate these services are as usable as possible. A usable service will reduce the time you spend describing how to use the service and provide the patron with more time for analysis and synthesis. Consequently, it will reduce your costs as well as your patron's. In turn, this will improve the patron's perception of the library, and you will have time to explore ways to improve other library services.

Testing for usable systems in libraries

If you want to measure the usability of your library systems, then you first need to create a list of goals or tasks the system is supposed to allow users to accomplish. Goals are broad objectives broken down into smaller, discrete tasks. For example, a goal might be for the user to print a list of book titles from your catalog pertaining to a particular subject. Tasks beneath this objective might include: 1) articulating an information need, 2) locating the interface to the catalog from your website, 3) issuing author, title, subject, as well as keyword search strategies, 4) evaluating the results, 5) marking items for processing, and finally 6) printing the marked items.

Similarly, the process of retrieving an article from your collection may include a host of tasks: 1) identifying the source of the article (i.e. a journal title), 2) knowing to search the catalog for journal title holdings information, 3) searching the catalog for the title, 4) interpreting the holdings information, 5) identifying the digital or physical location of the journal, 6) issuing an interlibrary loan request (a set of tasks in itself), or 7) going to the stacks to finally read and/or photocopy the text.

Looking at the situation this way, librarians expect a whole lot from their patrons!

After identifying one or more sets of goals and/or tasks, you then need people, a user group, for testing purposes. The characteristics of your user group can tip the outcomes of the usability study. People with physical impairments of some sort may encounter one set of problems, infrequent users of your system will have other sets of problems, less educated people will have yet another set of problems, and last but not least, people of differing cultural backgrounds will uncover further difficulties. It almost seems hopeless. At the same time, understanding all the differences in your user populations will force you to articulate policies and tailor your services accordingly.

Testing is the next phase of the process. As in any social science, there are numerous ways to examine and evaluate behavior. Usability studies tend to rely on combinations of surveys, interviews, and direct user observation. The surveys and interviews are employed for the purposes of gathering expectations and satisfaction levels. User groups are usually then given sets of goals to accomplish and access to the tools being tested. The users are then asked to try to accomplish the goals while talking aloud. Their actions are usually recorded manually or with the help of video cameras. It is important to remember that the users are not the subject of the experiments; the subjects of the experiments are the tools used to accomplish the goals, and in this case, the tools are your information services.

The next step is to evaluate the results of the test and make recommendations accordingly. Evaluation of the surveys goes through the typical statistical analysis applied to other survey instruments. The results of these evaluations can be used as benchmarks and compared to the results of the interviews. Do they match? Have people been consistent in their attitudes towards your systems? When there are consistencies, then those consistencies may very well represent true descriptions of your system or the people's feelings towards it. If there are inconsistencies, then something is amiss, and it may very well be your testing instruments.

Direct user observation plays a bigger role in the evaluation process. Testers are encouraged to think out loud so their thoughts can be recorded and questions can be answered. These sorts of interactions can be quite illuminating. I know because I've been there. Audio and video recordings of the interaction between tester and product/service are expensive and more complicated to set up, but they allow you, as the evaluator, to play and, more importantly, re-play the tester/system interactions. You will be looking for testers taking wrong turns, experimenting or not experimenting with functions, and marking the time it takes to accomplish the test's goals.
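The things you watch for during observation, such as wrong turns and time to completion, can be tallied from a simple session log. Here is a minimal sketch, assuming a hypothetical log of timestamped tester actions (the event names and timestamps are invented for illustration):

```python
# Sketch: tallying wrong turns and time-on-task from a hypothetical
# observation log. Event labels and timestamps are illustrative.

# Each event: (seconds since session start, action label)
log = [
    (0, "start"),
    (12, "opened catalog"),
    (40, "wrong turn"),     # e.g. searched the wrong index
    (75, "issued subject search"),
    (130, "wrong turn"),    # e.g. backed out of the results page
    (185, "printed marked items"),
    (190, "done"),
]

# Count the mis-steps the observer flagged during the session.
wrong_turns = sum(1 for _, action in log if action == "wrong turn")

# Elapsed time from the first event to the last.
time_on_task = log[-1][0] - log[0][0]

print(f"wrong turns: {wrong_turns}")      # 2
print(f"time on task: {time_on_task}s")   # 190s
```

Even a paper tally sheet amounts to the same bookkeeping; the recording equipment simply lets you re-play the session and count more accurately.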

Through this process you will get an understanding of where your system is more and less usable. Based on your system's weak points you can then make recommendations for improvement. Improvements often come in the form of removing jargon, facilitating easy navigation in and around systems with a minimum of mouse clicks, and providing undo as well as "wizard" functions.

Implementing the recommendations may prove challenging. Creating a wish list of possible improvements is easy; acting on it is easier said than done. Furthermore, the people who have the skills to create these systems are often more attuned to pure function and less in tune with the human elements of systems. "This car is really fast and performs well, but I don't want to drive it because it's so uncomfortable, and besides that, it's the wrong color." You will have to swallow your pride when implementing the recommendations of your usability test.

A difficulty with usability

I am new to usability, but after dabbling with its principles and applying them to a few of the systems I have built, I now understand that usability is much more than creating an aesthetically pleasing interface. Aesthetics as well as functionality, user satisfaction, and ease of use all play critical roles in usable systems.

At the same time, there are aspects of gathering, analyzing, and synthesizing information into knowledge that can't very well be automated into a tool. Articulating an information need is one of them. Evaluating the quality of information is another. Discovering similarities and differences between sets of information is yet a third. All of these things require thinking, and thinking is something computers don't do. Even if we had the most usable of all systems, there would still be parts of library work that are challenging or frustrating to users, depending on their perspective. Overcoming these challenges and eliminating these frustrations are what public service in libraries is all about. In the meantime, create usable systems and they will market themselves. People will increasingly expect them in future library services.


  1. Ergonomic requirements for office work with visual display terminals (VDTs) -- Part 11: Guidance on usability (ISO 9241-11:1998)

Creator: Eric Lease Morgan <>
Source: This is a pre-edited version of a column published in Computers in Libraries.
Date created: 1999-07-15
Date updated: 2004-11-14
Subject(s): marketing; usability;