Search results for key=ZSS2010a : 1 match found.


Technical Reports

2010

  • @techreport{ZSS2010a,
    	vgclass =	{report},
    	author =	{Zaidi, Nayyar Abbas and David McG.\ Squire and David
    	Suter},
    	title =	{A Simple Gradient-based Metric Learning Algorithm for Object
    	Recognition},
    	number =	{2010/256},
    	institution =	{Clayton School of Information Technology, Monash
    	University},
    	address =	{Clayton Campus, Melbourne, 3800, Australia},
    	year =	{2010},
    	url =	{/publications/postscript/2010/tr-2010-256-full.pdf},
    	abstract =	{The Nearest Neighbor (NN) classification/regression
    	technique, besides its simplicity, is one of the most widely applied
    	and well-studied techniques for pattern recognition in machine
    	learning. Its only drawback is the assumption that a proper metric is
    	available for measuring distances to the k nearest neighbors. It has
    	been shown that a k-NN classifier with the right distance metric can
    	perform better than other sophisticated alternatives such as Support
    	Vector Machine (SVM) and Gaussian Process (GP) classifiers. That is
    	why recent research in k-NN methods has focused on metric learning,
    	i.e., finding an optimized metric. In this paper we propose a simple
    	gradient-based algorithm for metric learning. We discuss in detail
    	the motivations behind metric learning, i.e., error minimization and
    	margin maximization. Our formulation differs from the prevalent
    	techniques in metric learning, where the goal is to maximize the
    	classifier's margin. Instead, our proposed technique (MEGM) finds an
    	optimal metric by directly minimizing the mean square error. Our
    	technique not only greatly improves k-NN performance but also
    	outperforms competing metric learning techniques. We also compared
    	our algorithm's performance with that of SVM. Promising results are
    	reported on major face, digit, object, and UCIML databases.},
    }
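
For illustration, the sketch below shows one way a gradient-based metric-learning loop of the kind described in the abstract could look: a linear transform A (equivalently, a Mahalanobis metric M = A^T A) is updated by gradient steps that directly minimize the mean square error of a soft nearest-neighbour prediction. This is only an assumed reading of the abstract, not the authors' MEGM code; the function names, the squared-exponential neighbour weighting, and the use of a numerical gradient are all assumptions made here for brevity.

    import numpy as np

    def soft_nn_predict(A, X, y):
        # Distances under the learned transform A (metric M = A.T @ A).
        Z = X @ A.T
        d2 = ((Z[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
        np.fill_diagonal(d2, np.inf)          # a point is not its own neighbour
        W = np.exp(-d2)                       # soft neighbour weights
        W /= W.sum(axis=1, keepdims=True)
        return W @ y                          # weighted-neighbour prediction

    def mse_metric_fit(X, y, lr=0.01, steps=100, eps=1e-4):
        # Hypothetical MEGM-style loop: gradient descent on the mean square error.
        n, p = X.shape
        A = np.eye(p)                         # start from the Euclidean metric
        for _ in range(steps):
            base = np.mean((soft_nn_predict(A, X, y) - y) ** 2)
            grad = np.zeros_like(A)
            # Finite-difference gradient, kept simple for illustration; an
            # analytic gradient would normally be used instead.
            for i in range(p):
                for j in range(p):
                    A_try = A.copy()
                    A_try[i, j] += eps
                    trial = np.mean((soft_nn_predict(A_try, X, y) - y) ** 2)
                    grad[i, j] = (trial - base) / eps
            A -= lr * grad                    # gradient step on the MSE
        return A

Under this reading, the learned transform A would simply be applied to the data before running an ordinary k-NN classifier or regressor.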