The Boosted Difference of Convex Functions Algorithm for Nonsmooth Functions

Please use this identifier to cite or link to this item: http://hdl.handle.net/10045/104647
Item information
Title: The Boosted Difference of Convex Functions Algorithm for Nonsmooth Functions
Author(s): Aragón Artacho, Francisco Javier | Vuong, Phan T.
Research group(s): Laboratorio de Optimización (LOPT)
Center, Department or Service: Universidad de Alicante. Departamento de Matemáticas
Keywords: Difference of convex functions | Boosted difference of convex functions algorithm | Kurdyka--Łojasiewicz property | Clustering problem | Multidimensional scaling problem
Knowledge area(s): Statistics and Operations Research
Publication date: 23-Mar-2020
Publisher: Society for Industrial and Applied Mathematics
Bibliographic citation: SIAM Journal on Optimization. 2020, 30(1): 980-1006. doi:10.1137/18M123339X
Abstract: The boosted difference of convex functions algorithm (BDCA) was recently proposed for minimizing smooth difference of convex (DC) functions. BDCA accelerates the convergence of the classical difference of convex functions algorithm (DCA) thanks to an additional line search step. The purpose of this paper is twofold. First, we show that this scheme can be generalized and successfully applied to certain types of nonsmooth DC functions, namely, those that can be expressed as the difference of a smooth function and a possibly nonsmooth one. Second, we show that there is complete freedom in the choice of the trial step size for the line search, which can further improve its performance. We prove that any limit point of the BDCA iterative sequence is a critical point of the problem under consideration and that the corresponding objective value is monotonically decreasing and convergent. The global convergence and convergence rate of the iterations are obtained under the Kurdyka--Łojasiewicz property. Applications and numerical experiments for two problems in data science are presented, demonstrating that BDCA outperforms DCA. Specifically, for the minimum sum-of-squares clustering problem, BDCA was on average 16 times faster than DCA, and for the multidimensional scaling problem, BDCA was 3 times faster than DCA.
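As the abstract explains, BDCA augments each DCA iteration with a line search along the direction joining the current point to the DCA iterate. The following is a minimal illustrative sketch of that scheme, not the paper's implementation or its test problems: it uses the toy DC decomposition g(x) = ||x||^2 (smooth) and h(x) = ||x||_1 (nonsmooth convex), for which the DCA subproblem has a closed-form solution, and an Armijo-type backtracking rule with assumed parameter values (alpha, beta, lam_bar).

```python
import numpy as np

# Toy DC objective phi(x) = g(x) - h(x) with g(x) = ||x||^2, h(x) = ||x||_1.
# This decomposition is an illustrative assumption, not from the paper.
def phi(x):
    return np.dot(x, x) - np.abs(x).sum()

def dca_step(x):
    # Classical DCA step: pick u in the subdifferential of h at x
    # (u = sign(x)), then minimize g(y) - <u, y>, which for g(y) = ||y||^2
    # gives y = u / 2 in closed form.
    return np.sign(x) / 2.0

def bdca(x0, alpha=0.1, beta=0.5, lam_bar=2.0, tol=1e-10, max_iter=100):
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        y = dca_step(x)          # DCA iterate
        d = y - x                # search direction used by the boosting step
        if np.linalg.norm(d) < tol:
            break                # critical point reached (up to tolerance)
        # Armijo-type backtracking line search from y along d, starting
        # from the trial step size lam_bar (the paper shows this trial
        # step can be chosen freely).
        lam = lam_bar
        while phi(y + lam * d) > phi(y) - alpha * lam**2 * np.dot(d, d):
            lam *= beta
            if lam < 1e-12:
                lam = 0.0        # fall back to the plain DCA iterate
                break
        x = y + lam * d
    return x
```

For this toy objective, each coordinate of the minimizer is ±1/2 (the minimizer of t^2 - |t|), so `bdca(np.array([1.0, -0.4]))` converges to `[0.5, -0.5]`; the line search simply degenerates to the DCA step once the iterate is already optimal.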
Sponsor(s): The first author was supported by MICINN of Spain and ERDF of EU, as part of the Ramón y Cajal program (RYC-2013-13327), and the grants MTM2014-59179-C2-1-P and PGC2018-097960-B-C22. The second author was supported by the FWF (Austrian Science Fund), Project M 2499-N32, and by the Vietnam National Foundation for Science and Technology Development (NAFOSTED), project 101.01-2019.320.
URI: http://hdl.handle.net/10045/104647
ISSN: 1052-6234 (Print) | 1095-7189 (Online)
DOI: 10.1137/18M123339X
Language: eng
Type: info:eu-repo/semantics/article
Rights: © 2020, Society for Industrial and Applied Mathematics
Peer reviewed: yes
Publisher's version: https://doi.org/10.1137/18M123339X
Appears in collections: INV - LOPT - Artículos de Revistas

Files in this item:
File: Aragon_Vuong_2020_SIAMJOptim_final.pdf | Size: 11,93 MB | Format: Adobe PDF


All documents in RUA are protected by copyright. Some rights reserved.