Tensor canonical decomposition (abbreviated as CANDECOMP/PARAFAC or CP) decomposes
a tensor into a sum of rank-one tensors, and finds numerous applications in signal
processing, hypergraph analysis, data analysis, etc. Alternating least squares (ALS) is one
of the most popular numerical algorithms for computing it. While there have been many
efforts to enhance its efficiency, in general its convergence cannot be guaranteed.
In this paper, we combine ALS with the trust-region technique from the optimization
field to obtain a trust-region-based alternating least-squares (TRALS) method for CP.
Under mild assumptions, we prove that the whole iterative sequence generated by TRALS
converges to a stationary point of CP. This provides a reasonable way to alleviate
swamps, the notorious phenomenon in which ALS slows down dramatically. Moreover,
in contrast to the regularized alternating least-squares (RALS) method, the trust region
itself provides a self-adaptive way of choosing the parameter, which is essential
for the efficiency of the algorithm. Our theoretical result is therefore stronger than that of RALS
in [26], where it was only proved that a cluster point of the iterative sequence generated by RALS
is a stationary point. To accelerate the new algorithm, we adopt an extrapolation
scheme. We apply our algorithm to the amino acid fluorescence data decomposition from
chemometrics, and to the BCM decomposition and the rank-($L_r$, $L_r$, 1) decomposition arising from signal
processing, and compare it with ALS and RALS. The numerical results show that TRALS is
superior to ALS and RALS in terms of both the number of iterations and CPU time.
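
For readers unfamiliar with the baseline algorithm referenced above, the following is a minimal numpy sketch of plain CP-ALS for a dense third-order tensor, assuming the standard mode-n unfolding and Khatri-Rao normal-equations form. The function names (`cp_als`, `khatri_rao`, `unfold`), the rank, and the stopping rule are illustrative; the snippet does not include the trust-region safeguard or the extrapolation scheme proposed in the paper.

```python
# Minimal sketch of plain ALS for third-order CP decomposition (the baseline
# method discussed in the abstract, not the authors' TRALS variant).
import numpy as np

def khatri_rao(U, V):
    """Column-wise Kronecker product of U (m x R) and V (n x R) -> (m*n x R)."""
    return (U[:, None, :] * V[None, :, :]).reshape(-1, U.shape[1])

def unfold(T, mode):
    """Mode-n unfolding, with C-ordering of the remaining modes."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def cp_als(T, R, n_iter=500, tol=1e-8, seed=0):
    """Fit T ~ sum_r a_r o b_r o c_r by alternating least squares."""
    rng = np.random.default_rng(seed)
    I, J, K = T.shape
    A, B, C = (rng.standard_normal((d, R)) for d in (I, J, K))
    prev_err = np.inf
    for _ in range(n_iter):
        # Each update solves a linear least-squares problem for one factor
        # with the other two fixed (normal-equations form).
        A = unfold(T, 0) @ khatri_rao(B, C) @ np.linalg.pinv((B.T @ B) * (C.T @ C))
        B = unfold(T, 1) @ khatri_rao(A, C) @ np.linalg.pinv((A.T @ A) * (C.T @ C))
        C = unfold(T, 2) @ khatri_rao(A, B) @ np.linalg.pinv((A.T @ A) * (B.T @ B))
        err = np.linalg.norm(unfold(T, 0) - A @ khatri_rao(B, C).T)
        if abs(prev_err - err) < tol * max(err, 1.0):  # stagnation test
            break
        prev_err = err
    return A, B, C
```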