Search results for key=MMM2001a: 1 match found.


Refereed full papers (journals, book chapters, international conferences)

2001

  • @inproceedings{MMM2001a,
    	vgclass =	{refpap},
    	vgproject =	{viper,cbir},
    	author =	{Henning M\"uller and Wolfgang M\"uller and St\'ephane
    	Marchand-Maillet and David McG.\ Squire and Thierry Pun},
    	title =	{A web-based evaluation system for content-based image
    	retrieval},
    	booktitle =	{Proceedings of the 3rd International Workshop on
    	Multimedia Information Retrieval (in conjunction with ACM Multimedia
    	2001)},
    	address =	{Ottawa, Canada},
    	pages =	{50--54},
    	month =	{October~5},
    	year =	{2001},
    	doi =	{10.1145/500933.500949},
    	url =	{/publications/postscript/2001/MuellerHMuellerWMarchandSquirePun_acmmir2001.pdf},
    	url1 =	{/publications/postscript/2001/MuellerHMuellerWMarchandSquirePun_acmmir2001.ps.gz},
    	abstract =	{This paper describes a benchmark test for content-based
    	image retrieval systems (CBIRSs) using the query-by-example (QBE)
    	paradigm. The benchmark is accessible via the Internet and thus makes
    	it possible to evaluate any image retrieval system that is compliant
    	with the Multimedia Retrieval Markup Language (MRML) for query
    	formulation and result transmission, allowing a quick and easy
    	comparison of different features and algorithms for CBIRSs. The
    	benchmark is not only based on a standardized communication protocol
    	between the benchmark server and the benchmarked system, but also
    	uses a freely downloadable image database so that the results are
    	reproducible. A CBIR system that uses MRML, as well as other
    	components for developing MRML-based applications, can be downloaded
    	free of charge. The evaluation is based on several queries and known
    	relevance sets for these queries. Several answer sets for the same
    	query image are possible when relevance judgments from several users
    	exist, so almost any sort of user judgment can be incorporated into
    	the system. The final results are averaged over all the queries. The
    	evaluation of several steps of relevance feedback, based on the
    	collected relevance judgments, is also included in the benchmark. The
    	performance of relevance feedback is often regarded as even more
    	important than the performance in the first query step, because only
    	with relevance feedback can the adaptation of the system to the
    	user's subjective goal be measured. For the evaluation of a system
    	with relevance feedback, the same evaluation measures are used on the
    	query results as for the first query step.},
    }
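
As a rough illustration of the evaluation procedure the abstract describes
(a per-query measure computed against a known relevance set, then averaged
over all benchmark queries), here is a minimal Python sketch. All names and
data below are hypothetical, and precision at rank k merely stands in for
whatever measures the paper actually uses.

    def precision_at_k(ranked_ids, relevant_ids, k):
        # Fraction of the top-k retrieved images that are in the relevance set.
        return sum(1 for image_id in ranked_ids[:k] if image_id in relevant_ids) / k

    def benchmark_score(results_per_query, relevance_sets, k=4):
        # Average the per-query measure over all benchmark queries.
        scores = [precision_at_k(ranked, relevance_sets[query], k)
                  for query, ranked in results_per_query.items()]
        return sum(scores) / len(scores)

    # Hypothetical data: ranked answers and known relevance sets per query.
    results = {"q1": ["a", "b", "c", "d"], "q2": ["d", "e", "a", "f"]}
    relevant = {"q1": {"a", "c"}, "q2": {"e"}}
    print(benchmark_score(results, relevant))  # (0.5 + 0.25) / 2 = 0.375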