Details
Original language | English |
---|---|
Pages (from-to) | 637-642 |
Number of pages | 6 |
Journal | IEEE Control Systems Letters |
Volume | 9 |
Early online date | 5 Jun 2025 |
Publication status | Published - 2025 |
Abstract
Low-rank matrix regression is a fundamental problem in data science with various applications in systems and control. Nuclear norm regularization has been widely applied to solve this problem due to its convexity. However, it suffers from high computational complexity and the inability to directly specify the rank. This work introduces a novel framework for low-rank matrix regression that addresses both unstructured and Hankel matrices. By decomposing the low-rank matrix into rank-1 bases, the problem is reformulated as an infinite-dimensional sparse learning problem. The least-angle regression (LAR) algorithm is then employed to solve this problem efficiently. For unstructured matrices, a closed-form LAR solution is derived with equivalence to a normalized nuclear norm regularization problem. For Hankel matrices, a real-valued polynomial basis reformulation enables effective LAR implementation. Two numerical examples in network modeling and system realization demonstrate that the proposed approach significantly outperforms the nuclear norm method in terms of estimation accuracy and computational efficiency.
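The abstract contrasts the proposed LAR-based framework with the standard nuclear-norm-regularized formulation. For context, the sketch below illustrates that baseline only, not the paper's method: nuclear-norm-regularized matrix regression solved by proximal gradient descent with singular value thresholding. The measurement map, problem dimensions, step size, and regularization weight are illustrative assumptions, not values from the paper.

```python
# Minimal sketch (NOT the paper's LAR method): nuclear-norm-regularized matrix
# regression solved by proximal gradient descent (ISTA). All dimensions,
# the random measurement map, step size, and regularization weight are
# illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
m, n, r, p = 20, 15, 3, 200           # matrix size, true rank, number of measurements

# Ground-truth low-rank matrix and random linear measurements y_k = <A_k, X> + noise
X_true = rng.standard_normal((m, r)) @ rng.standard_normal((r, n))
A = rng.standard_normal((p, m, n))
y = np.einsum('kij,ij->k', A, X_true) + 0.01 * rng.standard_normal(p)

def svt(M, tau):
    """Singular value thresholding: proximal operator of tau * ||.||_*."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

lam, step = 1.0, 1e-3                 # regularization weight and gradient step (assumed)
X = np.zeros((m, n))
for _ in range(1000):
    residual = np.einsum('kij,ij->k', A, X) - y
    grad = np.einsum('k,kij->ij', residual, A)   # gradient of 0.5*||A(X) - y||^2
    X = svt(X - step * grad, step * lam)         # proximal (ISTA) update

print("relative recovery error:", np.linalg.norm(X - X_true) / np.linalg.norm(X_true))
print("estimated rank:", np.linalg.matrix_rank(X, tol=1e-3))
```

According to the abstract, the LAR-based approach instead decomposes the matrix into rank-1 bases and solves the resulting sparse learning problem, which avoids the computational cost of the convex formulation above and allows the rank to be specified directly.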
Keywords
- identification
- Low-rank approximation
- model reduction
- subspace methods
ASJC Scopus subject areas
- Engineering (all)
- Control and Systems Engineering
- Mathematics (all)
- Control and Optimization
Cite this
- Standard
- Harvard
- APA
- Vancouver
- BibTeX
- RIS
Low-Rank Matrix Regression via Least-Angle Regression. / Yin, Mingzhou; Muller, Matthias A. In: IEEE Control Systems Letters, Vol. 9, 2025, p. 637-642.
Research output: Contribution to journal › Article › Research › peer review
TY - JOUR
T1 - Low-Rank Matrix Regression via Least-Angle Regression
AU - Yin, Mingzhou
AU - Muller, Matthias A.
N1 - Publisher Copyright: © 2017 IEEE.
PY - 2025
Y1 - 2025
N2 - Low-rank matrix regression is a fundamental problem in data science with various applications in systems and control. Nuclear norm regularization has been widely applied to solve this problem due to its convexity. However, it suffers from high computational complexity and the inability to directly specify the rank. This work introduces a novel framework for low-rank matrix regression that addresses both unstructured and Hankel matrices. By decomposing the low-rank matrix into rank-1 bases, the problem is reformulated as an infinite-dimensional sparse learning problem. The least-angle regression (LAR) algorithm is then employed to solve this problem efficiently. For unstructured matrices, a closed-form LAR solution is derived with equivalence to a normalized nuclear norm regularization problem. For Hankel matrices, a real-valued polynomial basis reformulation enables effective LAR implementation. Two numerical examples in network modeling and system realization demonstrate that the proposed approach significantly outperforms the nuclear norm method in terms of estimation accuracy and computational efficiency.
AB - Low-rank matrix regression is a fundamental problem in data science with various applications in systems and control. Nuclear norm regularization has been widely applied to solve this problem due to its convexity. However, it suffers from high computational complexity and the inability to directly specify the rank. This work introduces a novel framework for low-rank matrix regression that addresses both unstructured and Hankel matrices. By decomposing the low-rank matrix into rank-1 bases, the problem is reformulated as an infinite-dimensional sparse learning problem. The least-angle regression (LAR) algorithm is then employed to solve this problem efficiently. For unstructured matrices, a closed-form LAR solution is derived with equivalence to a normalized nuclear norm regularization problem. For Hankel matrices, a real-valued polynomial basis reformulation enables effective LAR implementation. Two numerical examples in network modeling and system realization demonstrate that the proposed approach significantly outperforms the nuclear norm method in terms of estimation accuracy and computational efficiency.
KW - identification
KW - Low-rank approximation
KW - model reduction
KW - subspace methods
UR - http://www.scopus.com/inward/record.url?scp=105007601918&partnerID=8YFLogxK
U2 - 10.1109/LCSYS.2025.3577081
DO - 10.1109/LCSYS.2025.3577081
M3 - Article
AN - SCOPUS:105007601918
VL - 9
SP - 637
EP - 642
JO - IEEE Control Systems Letters
JF - IEEE Control Systems Letters
ER -