Refereed full papers (journals, book chapters, international conferences)


@article{MMS2001,
	vgclass =	{refpap},
	vgproject =	{cbir,viper},
	author =	{Henning M\"{u}ller and Wolfgang M\"{u}ller and David McG.\
	Squire and St\'{e}phane Marchand-Maillet and Thierry Pun},
	title =	{Performance Evaluation in Content-Based Image Retrieval: {O}verview and Proposals},
	journal =	{Pattern Recognition Letters},
	volume =	{22},
	number =	{5},
	pages =	{593--601},
	year =	{2001},
	note =	{(special issue on Image/Video Indexing and Retrieval)},
	doi =	{},
	url1 =	{/publications/postscript/2000/},
	url2 =	{/publications/postscript/2000/MuellerHMuellerWSquireMarchandPun_prl2000.pdf},
	abstract =	{Evaluation of retrieval performance is a crucial problem
	in content-based image retrieval (CBIR). Many different methods for
	measuring the performance of a system have been created and used by
	researchers. This article discusses the advantages and shortcomings of
	the performance measures currently used. Problems such as defining a
	common image database for performance comparisons and a means of
	getting relevance judgments (or ground truth) for queries are
	discussed. The relationship between CBIR and information retrieval
	(IR) is made clear, since IR researchers have decades of experience
	with the evaluation problem. Many of their solutions can be used for
	CBIR, despite the differences between the fields. Several methods used
	in text retrieval are explained. Proposals for performance measures
	and means of developing a standard test suite for CBIR, similar to
	that used in IR at the annual Text REtrieval Conference (TREC), are
	presented.}
}
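The IR-style performance measures the abstract refers to can be illustrated with precision and recall over a ranked retrieval list. This is a minimal sketch, not code from the paper; the image identifiers and relevance judgments below are invented for demonstration.

```python
# Precision and recall for a single query: standard IR measures
# discussed in the CBIR evaluation literature. Example data is made up.

def precision_recall(retrieved, relevant):
    """Return (precision, recall) for a ranked list vs. a relevance set."""
    relevant = set(relevant)
    hits = sum(1 for doc in retrieved if doc in relevant)
    precision = hits / len(retrieved) if retrieved else 0.0
    recall = hits / len(relevant) if relevant else 0.0
    return precision, recall

retrieved = ["img3", "img7", "img1", "img9"]  # hypothetical system ranking
relevant = {"img1", "img3", "img5"}           # hypothetical ground truth
p, r = precision_recall(retrieved, relevant)
print(p, r)  # 0.5 0.666...
```

Evaluating the same function at successive list cut-offs yields the precision-recall curves commonly reported in both IR and CBIR benchmarks such as TREC.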