We introduce an optimization model of support vector regression with group lasso regularization and develop a class of efficient two-step fixed-point proximity algorithms to solve it numerically. To overcome the difficulty caused by the non-differentiability of both the group lasso regularization term and the loss function in the proposed model, we characterize its solutions as fixed points of a nonlinear map defined in terms of the proximity operators of the functions appearing in the objective function. Based on this fixed-point equation, we propose a class of two-step fixed-point algorithms to solve the optimization problem numerically and establish convergence results for the proposed algorithms. Numerical experiments on both synthetic data and real-world benchmark data demonstrate the advantages of the proposed model and algorithms.
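Since the abstract does not give algorithmic details, the following is only a minimal illustrative sketch of the proximity operator of the group lasso penalty (block soft-thresholding), the basic building block the abstract refers to. The group partition, the parameter names, and the example values below are assumptions for illustration; this is not the paper's actual two-step fixed-point scheme.

```python
import numpy as np

def prox_group_lasso(w, groups, lam):
    """Proximity operator of lam * sum_g ||w_g||_2 (block soft-thresholding).

    groups: list of index arrays partitioning the coordinates of w (assumed layout).
    Each block w_g is scaled by max(0, 1 - lam / ||w_g||_2), so small blocks
    are set exactly to zero, which is the group-sparsity effect of the penalty.
    """
    out = w.copy()
    for g in groups:
        norm_g = np.linalg.norm(w[g])
        scale = max(0.0, 1.0 - lam / norm_g) if norm_g > 0 else 0.0
        out[g] = scale * w[g]
    return out

# Hypothetical usage: shrink a 4-dimensional vector split into two groups.
w = np.array([3.0, 4.0, 0.5, -0.5])
groups = [np.array([0, 1]), np.array([2, 3])]
print(prox_group_lasso(w, groups, lam=1.0))
# First block (norm 5) is shrunk to [2.4, 3.2]; second block (norm ~0.71) is zeroed.
```

In the setting described by the abstract, proximity operators of this kind, together with that of the (non-differentiable) loss, define the nonlinear map whose fixed points characterize the model's solutions; the paper's two-step iteration is built on that characterization.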