Adaptive training set reduction for nearest neighbor classification

Item information
Title: Adaptive training set reduction for nearest neighbor classification
Authors: Rico-Juan, Juan Ramón | Iñesta, José M.
Research Group/s: Reconocimiento de Formas e Inteligencia Artificial
Center, Department or Service: Universidad de Alicante. Departamento de Lenguajes y Sistemas Informáticos
Keywords: Editing | Condensing | Rank methods | Sorted prototypes selection | Adaptive pattern recognition | Incremental algorithms
Knowledge Area: Lenguajes y Sistemas Informáticos
Issue Date: 15-Feb-2014
Publisher: Elsevier
Citation: Neurocomputing. 2014, Accepted Manuscript, Available online 15 February 2014. doi:10.1016/j.neucom.2014.01.033
Abstract: The research community related to the human-interaction framework is becoming increasingly interested in interactive pattern recognition, taking direct advantage of the feedback information provided by the user at each interaction step in order to improve raw performance. The application of this scheme requires learning techniques that are able to adaptively re-train the system and tune it to user behavior and the specific task considered. Traditional static editing methods filter the training set by applying certain rules in order to eliminate outliers or retain those prototypes that can be beneficial in classification. This paper presents two new adaptive rank methods for selecting the best prototypes from a training set in order to establish its size according to an external parameter that controls the adaptation process, while maintaining the classification accuracy. These methods estimate the probability that each prototype correctly classifies a new sample. This probability is used to sort the training set by relevance in classification. The results show that the proposed methods are able to maintain the error rate while reducing the size of the training set, thus allowing new examples to be learned with a few extra computations.
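To illustrate the general idea described in the abstract (not the paper's actual algorithms), the following minimal Python sketch ranks prototypes by a leave-one-out nearest-neighbor vote, used here as a stand-in for the probability of correctly classifying a new sample, and then keeps only the top-ranked fraction given by an external size parameter. The function names (rank_prototypes, reduce_training_set) and the specific probability estimate are hypothetical assumptions for illustration only.

```python
import numpy as np

def rank_prototypes(X, y):
    """Rank prototypes by how often they are the nearest neighbor of a
    same-class sample in a leave-one-out pass (an assumed proxy for the
    probability of classifying a new sample correctly)."""
    n = len(X)
    votes = np.zeros(n)   # correct nearest-neighbor decisions per prototype
    trials = np.zeros(n)  # times each prototype was the nearest neighbor
    for i in range(n):
        d = np.linalg.norm(X - X[i], axis=1)  # distances to all prototypes
        d[i] = np.inf                          # exclude the sample itself
        j = int(np.argmin(d))                  # nearest remaining prototype
        trials[j] += 1
        if y[j] == y[i]:
            votes[j] += 1
    score = votes / np.maximum(trials, 1)      # per-prototype relevance score
    return np.argsort(-score)                  # indices, most relevant first

def reduce_training_set(X, y, keep_fraction=0.5):
    """Keep the top-ranked fraction of prototypes; keep_fraction plays the
    role of the external parameter that controls the reduced set size."""
    order = rank_prototypes(X, y)
    m = max(1, int(keep_fraction * len(X)))
    keep = order[:m]
    return X[keep], y[keep]
```

A reduced set obtained this way can then be used with a standard nearest-neighbor classifier; the ranking pass is the only extra computation when new examples are incorporated.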
Sponsor: This work is partially supported by the Spanish CICYT under project DPI2006-15542-C04-01, the Spanish MICINN through project TIN2009-14205-C04-01 and by the Spanish research program Consolider Ingenio 2010: MIPRCV (CSD2007-00018).
URI: http://hdl.handle.net/10045/35673
ISSN: 0925-2312 (Print) | 1872-8286 (Online)
DOI: 10.1016/j.neucom.2014.01.033
Language: eng
Type: info:eu-repo/semantics/article
Peer Review: yes
Publisher version: http://dx.doi.org/10.1016/j.neucom.2014.01.033
Appears in Collections:INV - GRFIA - Artículos de Revistas

Files in This Item:
File: 2014_Rico-Inesta_Neurocomputing.pdf | Description: Accepted Manuscript (open access) | Size: 463,9 kB | Format: Adobe PDF


Items in RUA are protected by copyright, with all rights reserved, unless otherwise indicated.