Intermediate results of the icon tests: Nitrux

In a series of icon tests we are currently studying the effects of icon design on usability. This article, however, does not focus on these general design effects but presents findings specific to the Nitrux icon set.

Introduction

The introduction of the new Breeze icon set in KDE made us wonder once again which aspects of an icon set actually affect its usability, and how. We investigated the Oxygen and Tango icons for the LibreOffice project before, but our focus then was on checking all icons of the standard toolbar. This time we focus on different icon sets and use 13 common actions to compare them.

With this series we are going to test at least 10 different free icon sets: Breeze, Oxygen, Tango, Faenza, Nuvola, Nitrux, Elementary, Crystal Project, Humanity and Treepata. These icon sets differ in various respects: the use of color and detail, whether they are flat or not, and sometimes even the metaphors used.

In general, we want to analyze the effects of icon design on the overall performance of an icon set. Statistics on this question can obviously only be computed after all icon sets have been tested. But every test gives us specific insights into the strengths and weaknesses of the icon set in question.

In this post we share some findings about the Nitrux icon set (many thanks to Uri Herrara from Nitrux S.A. for supporting this study).

The study was completed by 566 participants (drop-out rate: 7%), with an average completion time of 3:27 minutes.

Results of Nitrux icons

Table 1 lists the aggregated quality indicators. They show how well the icons used in the test were suited to symbolize the different terms. The indicator ranges from 1 (no fit) to 10 (perfect fit); you would expect values of at least 9 for well-represented terms.

Table 1: Quality of the icon set for the different terms, based on the assignment ratio (percentage of missing assignments) and conspicuity (speed of picking an icon).
Term      Quality Indicator
Cut       1
New       7.5
Redo      7.5
Undo      7.5
Open      8
Paste     8.5
Remove    9
Copy      9
Link      9
Save      9.5
Add       10
Print     10
Search    10
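
The post does not spell out how the two measures are aggregated into this 1-to-10 score. Purely as an illustration of the idea, the following Python sketch combines a missing-assignment ratio and a picking-speed measure into such a score; the function name, weights, and normalization are assumptions, not the actual UserWeave computation.

def quality_indicator(missing_ratio, median_pick_time, max_pick_time=20.0,
                      weight_assignment=0.7, weight_speed=0.3):
    """Illustrative only: map assignment ratio and picking speed onto a 1-10 score.

    missing_ratio    -- fraction of participants who did not assign the icon (0..1)
    median_pick_time -- median time in seconds needed to pick the icon
    max_pick_time    -- assumed "slowest reasonable" picking time in seconds
    """
    assignment_score = 1.0 - missing_ratio                          # 1.0 = everyone assigned it
    speed_score = max(0.0, 1.0 - median_pick_time / max_pick_time)  # 1.0 = picked instantly
    combined = weight_assignment * assignment_score + weight_speed * speed_score
    return round(1 + 9 * combined, 1)                               # map 0..1 onto 1..10

# Hypothetical example: an icon assigned by 95% of participants in a median of 4 seconds.
print(quality_indicator(missing_ratio=0.05, median_pick_time=4.0))  # 9.1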

Table 2 shows a cross-table with the percentage of false associations. These are cases where users did not choose the intended icon for a term but some other icon instead.

Table 2: Cross-table of icons and terms with the percentage of false associations. The direct match is inverted (1 − value, e.g. 0.99 for Add) to obtain comparable data.

[Image: Results_Icontest-Nitrux (cross-table of false associations)]
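
To make the inversion mentioned in the caption concrete: in the diagonal cells (a term matched with its intended icon) the reported value is 1 minus the match rate, so that a high number always signals a problem, just like the off-diagonal false-association percentages. Below is a minimal sketch of this transformation; the dict-of-dicts layout and the numbers are assumptions for illustration, not the actual study export.

def invert_direct_matches(cross_table):
    """Replace each term's direct match by 1 - value, so that diagonal and
    off-diagonal cells both read as a 'share of problems'."""
    inverted = {}
    for term, row in cross_table.items():
        inverted[term] = {
            icon: round(1 - share, 2) if icon == term else share
            for icon, share in row.items()
        }
    return inverted

# Illustrative numbers, not the study's actual data: 99% picked the Add icon
# for "Add", 1% picked the New icon instead.
sample = {"Add": {"Add": 0.99, "New": 0.01}}
print(invert_direct_matches(sample))  # {'Add': {'Add': 0.01, 'New': 0.01}}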

Discussion

Nitrux is a special icon set: it reduces its icons as much as possible. This works extremely well for standard metaphors such as Save, Search, or Print. Other metaphors seem to be reduced too much, for example Redo, which is actually the standard symbol for Play, resulting in quite a lot of missing and wrong answers.

Most interesting, though, is Nitrux's attempt to introduce new metaphors. For example, a radically different approach was chosen for Cut, which makes a lot of sense when you think about the idea, but simply does not work for the users. Since the scissors metaphor tends to get the highest values in other tests, our immediate advice would be to change this icon accordingly.

Another interesting case is New and Open. These are the icons whose metaphors vary the most between different icon sets, and hence the scores they achieve also vary the most. Unfortunately, the metaphors chosen in Nitrux do not reach good scores for either, with the New icon performing an order of magnitude worse than the Open icon.

Last, Nitrux does not introduce new metaphors for the icons that scored badly in previous tests: Copy and Paste. It sticks to the same metaphors as other icon sets, and the results are comparable. We generally need to find new metaphors for these actions.

If you know how to design icons and would like to help us identify metaphors that work better, please contact us. Also, all raw results are publicly available on our open usability platform UserWeave.

As mentioned before, these results only reflect the internal quality of the Nitrux icon set. The final interpretation will be made after all sets have been tested. So stay tuned, please participate in our follow-up tests, and feel free to discuss these findings with us.