Content-based image retrieval is an area of growing interest. Various approaches exist which use color, texture, and shape to retrieve 'similar' images from a database. However, what do we mean by 'similar'? Traditionally, similarity is interpreted as distance in feature space, but this does not necessarily match human users' expectations. We report on two human studies which asked volunteers to select the images they considered 'most like' each image from the Brodatz dataset. Although the Brodatz images have the advantage of being an agreed standard in texture analysis, Brodatz certainly did not select them with this purpose in mind. The results from these studies provide a justification for selecting a subset of the Brodatz dataset for use in evaluating texture-based retrieval techniques. Images for which humans have difficulty agreeing on which other images are 'most like' them are also poor choices for comparison. Our results indicate which images are most likely to be classified as 'similar' by individual humans, and they can also serve to evaluate computer-based retrieval techniques.
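
To make the phrase 'distance in feature space' concrete, the following minimal sketch ranks a set of texture images by Euclidean distance between simple hand-crafted feature vectors. The feature choices (mean, standard deviation, gradient energy) and the synthetic stand-in images are illustrative assumptions only, not the descriptors or data evaluated in the paper.

```python
# Illustrative sketch: 'similarity' as Euclidean distance in a simple feature space.
# The features below are assumptions for demonstration, not the paper's method.
import numpy as np

def texture_features(image: np.ndarray) -> np.ndarray:
    """Summarise a grey-level image as a small feature vector."""
    img = image.astype(float)
    gy, gx = np.gradient(img)
    return np.array([
        img.mean(),               # overall brightness
        img.std(),                # contrast
        np.mean(gx**2 + gy**2),   # gradient energy, a coarse texture measure
    ])

def rank_by_distance(query: np.ndarray, database: dict) -> list:
    """Order database image names by feature-space distance to the query."""
    q = texture_features(query)
    scores = [(name, float(np.linalg.norm(texture_features(img) - q)))
              for name, img in database.items()]
    return sorted(scores, key=lambda pair: pair[1])

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Synthetic stand-ins for Brodatz textures (noise fields of varying contrast).
    db = {f"D{i}": rng.normal(128, 10 * i, size=(64, 64)) for i in range(1, 6)}
    query_img = rng.normal(128, 30, size=(64, 64))
    for name, dist in rank_by_distance(query_img, db):
        print(f"{name}: distance {dist:.2f}")
```

Whether the top-ranked image under such a distance measure is actually the one a human would pick as 'most like' the query is precisely the question the human studies address.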