Measuring Customer Satisfaction: Benchmarking Usability in a Complex Product Environment

Usability benchmarking is a cost-effective way to steer and evaluate the development process. Together with ELO Digital Office, we collected the key questions and approaches.

Like most companies, our partner ELO Digital Office regularly asks which areas of its extensive portfolio would benefit most from the limited user experience resources. It goes without saying that new products and features receive intensive attention through user-centered processes. At the same time, a changing environment, new insights and practical requirements make it necessary to improve the existing products as well.

The challenge in developing a user-friendly product is to achieve a homogeneous perceived quality across the whole product. It does little good when certain parts of a product are seen as especially good, only to be overshadowed by the poor quality of other areas.

Following Herzberg, a mandatory feature can only prevent dissatisfaction; only unanticipated product features increase satisfaction, and even these do so only when all basic requirements are fully met.

Together with ELO Digital Office, we therefore developed a concept for product-internal usability benchmarking. The idea is simple: we regularly measure the quality of the different areas of the product portfolio and can thus identify where an investment of user experience resources will benefit users most. This approach, however, raises some questions:

How often should the benchmarking be conducted?

Since the benchmarking helps determine which areas usability resources should be invested in, it makes sense to run the investigation once per release. ELO Digital Office releases about two new versions of its products per year, so a benchmarking survey should be conducted twice a year.

It is important to run the survey towards the end of a release's life cycle, so that users have had sufficient time to work thoroughly with the current version. Especially for software used in a business environment, it is important not only to assess learnability, as is typically done in the usability laboratory, but also to evaluate usefulness in daily productive use.

How can the quality be measured?

The goal of the benchmarking is to compare the different parts of the product with each other. A matrix question lends itself to such a direct comparison. The prompt should target overall satisfaction, for instance: ‘How satisfied are you with the current state of product XY and its different parts?’ A scale from ‘excellent’ to ‘needs improvement’ can be used for the answers. In this way, users state their personal satisfaction profile.
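
The aggregation behind such a profile is simple. The following snippet is a minimal sketch, not ELO's actual tooling: the area names, the numeric coding of the scale and the response data are assumptions made for the example.

```python
# Minimal sketch: aggregating matrix-question responses into a per-area
# satisfaction profile. Areas and the 1..5 coding are illustrative.
from statistics import mean

# One dict per respondent: area -> rating on a 1..5 scale
# (5 = 'excellent', 1 = 'needs improvement'); None = area not used.
responses = [
    {"scan": 4, "archive": 5, "tag": 2, "search": 3},
    {"scan": 5, "archive": 4, "tag": 3, "search": None},
    {"scan": 3, "archive": 5, "tag": 2, "search": 4},
]

def satisfaction_profile(responses):
    """Return mean rating and number of answers per product area."""
    areas = {area for r in responses for area in r}
    profile = {}
    for area in sorted(areas):
        ratings = [r[area] for r in responses if r.get(area) is not None]
        profile[area] = (round(mean(ratings), 2), len(ratings))
    return profile

for area, (avg, n) in satisfaction_profile(responses).items():
    print(f"{area:<8} mean={avg}  n={n}")
```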

Which areas should be compared to each other?

If tasks are completed at different workplaces, and perhaps even by different users, they have to be evaluated separately. Beyond that, the segmentation of the product into areas can follow one of two strategies:

  1. Division by dialogues or other elements that structure the interface
  2. Differentiation by the main tasks that are carried out with the program. In the products of ELO Digital Office these could be, for instance, ‘scan’, ‘archive’, ‘tag’ or ‘search documents’.

The second strategy is not always applicable, but clearly preferable, since it aligns with the real tasks of the users and can therefore be rated by them quite easily. The first strategy, in contrast, has the disadvantage that the users' real tasks often span several dialogues, and a dialogue's quality of use may differ from task to task. Often, however, given the enormous number of different user tasks, only the first strategy can be realized.

How can the survey be conducted?

We suggest conducting the surveys web-based, e.g. using the free tool UserWeave. This lowers the barrier to participation, and no direct back-channel is needed to send messages from within the product to the manufacturer. If there are no concerns about such a back-channel, the feedback mechanism can be integrated directly into the product. To raise the acceptance of such an integrated solution, users and administrators should be informed clearly which data will be transmitted and for what purpose.

What can you do with the results?

The benchmarking clearly indicates which areas of the software are problematic. It does not, however, provide any clues as to what exactly the problems are or how to fix them. Here, the classical toolbox of user experience design has to be applied to identify the problems precisely and to design solutions.

Alternatively, additional questions for a more detailed analysis can be incorporated into the questionnaire, e.g. based on the criteria formulated in ISO 9241. In doing so, one has to take care that the questionnaire does not get too long; otherwise the response rate and the willingness to participate in future benchmarking studies will decline.
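
As an illustration of what such follow-up items might look like, the sketch below uses a few statements loosely derived from the dialogue principles of ISO 9241-110 and adds a rough length check; the wording and the selection of principles are assumptions for the example, not a validated questionnaire.

```python
# Illustrative only: possible per-area follow-up items, loosely based on
# the dialogue principles of ISO 9241-110. Wording is an assumption.
FOLLOW_UP_ITEMS = {
    "suitability for the task": "The functions in this area support my task without detours.",
    "self-descriptiveness": "It is always clear what this area expects from me.",
    "conformity with user expectations": "This area behaves the way I expect it to.",
    "controllability": "I can control the pace and order of my work in this area.",
    "error tolerance": "Mistakes in this area are easy to recognize and correct.",
}

def questionnaire_length(areas, items=FOLLOW_UP_ITEMS):
    """Rough check that the questionnaire does not get too long."""
    return len(areas) * len(items)

print(questionnaire_length(["scan", "archive", "tag", "search"]))  # 20 items
```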

By repeating the survey regularly, it is possible to evaluate the effect of past product improvements. This is a fundamental precondition for setting the future strategic direction.
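
A minimal sketch of this comparison, assuming the per-area mean ratings of two benchmarking rounds are already available (the figures are illustrative):

```python
# Compare two survey rounds: which areas improved, and which are the
# lowest rated now? The ratings below are made-up example values.
previous = {"scan": 3.1, "archive": 4.4, "tag": 2.2, "search": 3.0}
current  = {"scan": 3.4, "archive": 4.5, "tag": 3.1, "search": 2.9}

def rating_changes(before, after):
    """Change in mean satisfaction per area between two survey rounds."""
    return {area: round(after[area] - before[area], 2)
            for area in before if area in after}

# Areas sorted by current rating: the lowest-rated ones are candidates
# for the next investment of user experience resources.
for area, delta in sorted(rating_changes(previous, current).items(),
                          key=lambda item: current[item[0]]):
    print(f"{area:<8} now {current[area]:.1f}  change {delta:+.2f}")
```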

What does it cost?

Compared to other methods, this form of benchmarking is inexpensive. Once the survey and the channel for recruiting users are set up, the benchmarking can be repeated every release with little effort. It becomes more complex, especially when free-text answers are included in the survey and therefore also have to be evaluated.

How long does it take?

Fewer answers are needed for reliable results than is often assumed. Depending on the number of user groups that are analyzed separately and on the homogeneity of the application, a few dozen answers are sufficient. In our experience these numbers can be reached within a few days.
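
As a rough, back-of-the-envelope illustration of why a few dozen answers can suffice for a relative comparison (the ratings below are made up, and a 1-to-5 coding of the scale is assumed):

```python
# How uncertain is a mean rating after about 30 answers? Example data only.
from statistics import mean, stdev
from math import sqrt

ratings = [4, 3, 5, 4, 2, 4, 3, 5, 4, 4, 3, 2, 5, 4, 3,
           4, 4, 3, 5, 2, 4, 3, 4, 5, 3, 4, 4, 3, 4, 5]  # n = 30

n = len(ratings)
se = stdev(ratings) / sqrt(n)  # standard error of the mean
print(f"mean {mean(ratings):.2f} +/- {1.96 * se:.2f} (approx. 95% interval)")
```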

Where can I see some examples?

We have been doing this kind of benchmarking for the project Tine 2.0 for some years now. The results are regularly documented in our blog; in addition, all raw data is available on the open usability platform UserWeave.