The yes/no detection task is a classic method for probing how humans process information. In this paper I was interested in one specific phenomenon of visual information processing: search asymmetries.

If you search for an item A amongst distracters B, you will achieve some level of performance. If you now switch the search around and look for B amongst A’s, and performance turns out to be different, then you have found a search asymmetry. This does not always happen; it depends on which image features you choose A and B to be, but it seems to occur for features such as colour and motion.

Who cares? People have used the existence of search asymmetries to argue for particular mechanisms underlying how visual information is processed. Specifically, they have argued for a 2-stage serial-parallel model (see the paper for details). For some reason that I am not sure of*, I took issue with this and instead found an alternative, purely parallel mechanism to be a more appealing explanation.

It turned out that others had already predicted that search asymmetries could be the result of unequal levels of uncertainty about the visual items A and B. The question addressed in this paper was: can we find experimental support for this parallel ‘unequal uncertainty’ account of search asymmetry?

The answer was ‘yes’. This did not exclude the possibility that 2-stage mechanisms provide a better account, but three different purely parallel models were able to account for the human performance data quite satisfactorily.
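To get an intuition for how unequal uncertainty alone can produce an asymmetry, here is a minimal simulation sketch (not the models from the paper) of a purely parallel max-rule observer doing yes/no search. Each item produces a noisy internal response; the observer says ‘yes’ if the largest response exceeds a criterion. The only thing that differs between the two search directions is which item type carries the larger internal noise. All parameter values (signal strength, the two sigmas, display size) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def search_accuracy(sigma_target, sigma_distracter, n_items=8,
                    n_trials=20000, signal=3.0):
    """Proportion correct of a parallel max-rule observer in yes/no search.

    Illustrative parameters only: signal strength, sigmas, and display
    size are assumptions, not values from the paper.
    """
    # Target-present displays: one target plus n_items - 1 distracters.
    target = rng.normal(signal, sigma_target, n_trials)
    dists = rng.normal(0.0, sigma_distracter, (n_trials, n_items - 1))
    max_present = np.maximum(target, dists.max(axis=1))
    # Target-absent displays: n_items distracters.
    max_absent = rng.normal(0.0, sigma_distracter,
                            (n_trials, n_items)).max(axis=1)
    # Sweep the yes/no criterion and report the best achievable accuracy,
    # i.e. an observer with a well-placed criterion.
    best = 0.0
    for c in np.linspace(-2.0, signal + 4.0, 200):
        hits = (max_present > c).mean()
        correct_rejections = (max_absent <= c).mean()
        best = max(best, (hits + correct_rejections) / 2.0)
    return best

# Hypothetical items: A is encoded precisely (low noise), B noisily.
acc_A_in_B = search_accuracy(sigma_target=1.0, sigma_distracter=2.0)
acc_B_in_A = search_accuracy(sigma_target=2.0, sigma_distracter=1.0)
print(acc_A_in_B, acc_B_in_A)
```

With these made-up numbers, searching for the noisy item B amongst precise distracters A comes out easier than the reverse: a display full of low-noise distracters rarely produces a spuriously large maximum response, whereas high-noise distracters often do. No serial second stage is needed, which is the flavour of explanation the paper tested.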

Vincent, B. (2011) Search asymmetries: parallel processing of uncertain sensory information, Vision Research, 51(15), 1741-1750.

* I prefer simpler explanations, and a purely parallel explanation is simpler than a parallel-then-serial one.