Adaptive training set reduction for nearest neighbor classification

Please use this identifier to cite or link to this item: http://hdl.handle.net/10045/35673
Item information
Title: Adaptive training set reduction for nearest neighbor classification
Author(s): Rico-Juan, Juan Ramón | Iñesta, José M.
Research group(s): Reconocimiento de Formas e Inteligencia Artificial
Center, Department or Service: Universidad de Alicante. Departamento de Lenguajes y Sistemas Informáticos
Keywords: Editing | Condensing | Rank methods | Sorted prototypes selection | Adaptive pattern recognition | Incremental algorithms
Knowledge area(s): Lenguajes y Sistemas Informáticos
Publication date: 15-Feb-2014
Publisher: Elsevier
Bibliographic citation: Neurocomputing. 2014, Accepted Manuscript, Available online 15 February 2014. doi:10.1016/j.neucom.2014.01.033
Abstract: The research community related to the human-interaction framework is becoming increasingly interested in interactive pattern recognition, which takes direct advantage of the feedback information provided by the user in each interaction step in order to improve raw performance. The application of this scheme requires learning techniques that are able to adaptively re-train the system and tune it to user behavior and the specific task considered. Traditional static editing methods filter the training set by applying certain rules in order to eliminate outliers or retain those prototypes that can be beneficial in classification. This paper presents two new adaptive rank methods for selecting the best prototypes from a training set in order to establish its size according to an external parameter that controls the adaptation process, while maintaining the classification accuracy. These methods estimate the probability of each prototype correctly classifying a new sample. This probability is used to sort the training set by relevance in classification. The results show that the proposed methods are able to maintain the error rate while reducing the size of the training set, thus allowing new examples to be learned with a few extra computations.
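The general idea described in the abstract — score each prototype by how likely it is to correctly classify a sample, sort by that score, and truncate the set to an externally controlled size — can be sketched as follows. This is a minimal illustrative sketch, not the paper's actual estimators: it approximates each prototype's "probability of correct classification" by counting how often it is the leave-one-out nearest neighbor of a same-class sample, and `keep_fraction` stands in for the external parameter that controls the reduction.

```python
import numpy as np

def rank_prototypes(X, y):
    """Score each prototype by how often it is the leave-one-out nearest
    neighbor of a same-class training sample -- a rough proxy for its
    probability of correctly classifying a new sample."""
    n = len(X)
    scores = np.zeros(n)
    # Pairwise squared Euclidean distances between all training samples.
    d = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1)
    np.fill_diagonal(d, np.inf)   # a sample never matches itself
    nn = d.argmin(axis=1)         # leave-one-out nearest neighbor of each sample
    for i, j in enumerate(nn):
        if y[i] == y[j]:
            scores[j] += 1        # prototype j classifies sample i correctly
    return scores

def reduce_training_set(X, y, keep_fraction=0.5):
    """Keep only the best-ranked prototypes; keep_fraction plays the role
    of the external parameter that controls the reduced set's size."""
    scores = rank_prototypes(X, y)
    order = np.argsort(-scores, kind="stable")  # highest score first
    k = max(1, int(round(len(X) * keep_fraction)))
    keep = order[:k]
    return X[keep], y[keep]
```

Prototypes that are never the correct nearest neighbor of any sample (isolated outliers, or points buried inside a cluster already covered by neighbors) sink to the bottom of the ranking and are the first to be discarded, which is what lets the set shrink without degrading 1-NN accuracy.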
Sponsor(s): This work is partially supported by the Spanish CICYT under project DPI2006-15542-C04-01, the Spanish MICINN through project TIN2009-14205-CO4-01 and by the Spanish research program Consolider Ingenio 2010: MIPRCV (CSD2007-00018).
URI: http://hdl.handle.net/10045/35673
ISSN: 0925-2312 (Print) | 1872-8286 (Online)
DOI: 10.1016/j.neucom.2014.01.033
Language: eng
Type: info:eu-repo/semantics/article
Peer reviewed: yes
Publisher's version: http://dx.doi.org/10.1016/j.neucom.2014.01.033
Appears in collections: INV - GRFIA - Artículos de Revistas

Files in this item:
File: 2014_Rico-Inesta_Neurocomputing.pdf | Description: Accepted Manuscript (open access) | Size: 463.9 kB | Format: Adobe PDF


All documents in RUA are protected by copyright. Some rights reserved.