
Efficient Online Inference and Learning in Partially Known Nonlinear State-Space Models by Learning Expressive Degrees of Freedom Offline

Research output: Chapter in book/report/conference proceeding › Conference contribution › Research › peer review

Details

Original language: English
Title of host publication: 2024 IEEE 63rd Conference on Decision and Control, CDC 2024
Publisher: Institute of Electrical and Electronics Engineers Inc.
Pages: 4157-4164
Number of pages: 8
ISBN (electronic): 9798350316339
ISBN (print): 979-8-3503-1634-6
Publication status: Published - 16 Dec 2024
Event: 63rd IEEE Conference on Decision and Control, CDC 2024 - Milan, Italy
Duration: 16 Dec 2024 - 19 Dec 2024

Publication series

Name: Proceedings of the IEEE Conference on Decision and Control
ISSN (Print): 0743-1546
ISSN (electronic): 2576-2370

Abstract

Intelligent real-world systems critically depend on expressive information about their system state and changing operating conditions, e.g., due to variation in temperature, location, wear, or aging. To provide this information, online inference and learning attempts to perform state estimation and (partial) system identification simultaneously. Current works combine tailored estimation schemes with flexible learning-based models but suffer from convergence problems and computational complexity due to the many degrees of freedom in the inference problem (i.e., parameters to determine). To resolve these issues, we propose a procedure for data-driven offline conditioning of a highly flexible Gaussian Process (GP) formulation such that online learning is restricted to a subspace spanned by expressive basis functions. Due to the simplicity of the transformed problem, a standard particle filter can be employed for Bayesian inference. In contrast to most existing works, the proposed method enables online learning of target functions that are nested nonlinearly inside a first-principles model. Moreover, we provide a theoretical quantification of the error introduced by restricting learning to a subspace. A Monte Carlo simulation study with a nonlinear battery model shows that the proposed approach enables rapid convergence with significantly fewer particles compared to a baseline and a state-of-the-art method.
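
To make the core idea of the abstract concrete, the following is a minimal sketch of online inference and learning with a bootstrap particle filter over a state augmented with basis-function weights. The dynamics, radial basis functions, noise levels, and measurement model below are illustrative assumptions, not the formulation from the paper; in the proposed method the expressive basis would be obtained offline from the conditioned GP.

# Minimal sketch: joint state estimation and function learning via a bootstrap
# particle filter over a state augmented with basis-function weights.
# All model choices (dynamics, basis, noise levels) are illustrative assumptions,
# not the paper's exact formulation.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical expressive basis (in the paper, obtained offline from a GP);
# here: a few radial basis functions on a scalar state.
centers = np.linspace(-2.0, 2.0, 5)

def phi(x):
    """Basis-function features evaluated at state x (shape: [..., 5])."""
    return np.exp(-0.5 * (x[..., None] - centers) ** 2)

def f_known(x, u):
    """Known first-principles part of the dynamics (illustrative)."""
    return 0.9 * x + u

def g_unknown(x, w):
    """Unknown target function, parameterized by the learned weights w."""
    return np.sum(phi(x) * w, axis=-1)

def measure(x):
    """Nonlinear measurement model (illustrative)."""
    return np.tanh(x)

# Particle filter over the augmented state z = (x, w).
N = 500                                       # particle count (assumed)
x = rng.normal(0.0, 1.0, N)                   # state particles
w = rng.normal(0.0, 0.5, (N, centers.size))   # weight particles
q_x, q_w, r = 0.1, 0.01, 0.05                 # process / weight-drift / measurement noise (assumed)

def pf_step(x, w, u, y):
    # Propagate: known model plus learned correction nested in the dynamics.
    x_pred = f_known(x, u) + g_unknown(x, w) + rng.normal(0.0, q_x, x.shape)
    w_pred = w + rng.normal(0.0, q_w, w.shape)  # slow random walk on weights
    # Weight by measurement likelihood and resample (multinomial).
    log_lik = -0.5 * ((y - measure(x_pred)) / r) ** 2
    lik = np.exp(log_lik - log_lik.max())
    p = lik / lik.sum()
    idx = rng.choice(len(x), size=len(x), p=p)
    return x_pred[idx], w_pred[idx]

# Example usage with synthetic data.
x_true = 0.5
for k in range(50):
    u = np.sin(0.1 * k)
    x_true = f_known(x_true, u) + 0.3 * np.tanh(x_true) + rng.normal(0.0, q_x)
    y = measure(x_true) + rng.normal(0.0, r)
    x, w = pf_step(x, w, u, y)
print("posterior-mean state:", x.mean(), "weights:", w.mean(axis=0).round(2))

Because only a low-dimensional weight vector is learned online, the augmented state stays small and a standard particle filter with few particles suffices, which mirrors the efficiency argument made in the abstract.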

Cite this

Efficient Online Inference and Learning in Partially Known Nonlinear State-Space Models by Learning Expressive Degrees of Freedom Offline. / Ewering, Jan-Hendrik; Volkmann, Björn; Ehlers, Simon Friedrich Gerhard et al.
2024 IEEE 63rd Conference on Decision and Control, CDC 2024. Institute of Electrical and Electronics Engineers Inc., 2024. p. 4157-4164 (Proceedings of the IEEE Conference on Decision and Control).

Research output: Chapter in book/report/conference proceeding › Conference contribution › Research › peer review

Ewering, J-H, Volkmann, B, Ehlers, SFG, Seel, T & Meindl, MB 2024, Efficient Online Inference and Learning in Partially Known Nonlinear State-Space Models by Learning Expressive Degrees of Freedom Offline. in 2024 IEEE 63rd Conference on Decision and Control, CDC 2024. Proceedings of the IEEE Conference on Decision and Control, Institute of Electrical and Electronics Engineers Inc., pp. 4157-4164, 63rd IEEE Conference on Decision and Control, CDC 2024, Milan, Italy, 16 Dec 2024. https://doi.org/10.1109/CDC56724.2024.10886241, https://doi.org/10.48550/arXiv.2409.09331
Ewering, J.-H., Volkmann, B., Ehlers, S. F. G., Seel, T., & Meindl, M. B. (2024). Efficient Online Inference and Learning in Partially Known Nonlinear State-Space Models by Learning Expressive Degrees of Freedom Offline. In 2024 IEEE 63rd Conference on Decision and Control, CDC 2024 (pp. 4157-4164). (Proceedings of the IEEE Conference on Decision and Control). Institute of Electrical and Electronics Engineers Inc. https://doi.org/10.1109/CDC56724.2024.10886241, https://doi.org/10.48550/arXiv.2409.09331
Ewering JH, Volkmann B, Ehlers SFG, Seel T, Meindl MB. Efficient Online Inference and Learning in Partially Known Nonlinear State-Space Models by Learning Expressive Degrees of Freedom Offline. In 2024 IEEE 63rd Conference on Decision and Control, CDC 2024. Institute of Electrical and Electronics Engineers Inc. 2024. p. 4157-4164. (Proceedings of the IEEE Conference on Decision and Control). doi: 10.1109/CDC56724.2024.10886241, 10.48550/arXiv.2409.09331
Ewering, Jan-Hendrik ; Volkmann, Björn ; Ehlers, Simon Friedrich Gerhard et al. / Efficient Online Inference and Learning in Partially Known Nonlinear State-Space Models by Learning Expressive Degrees of Freedom Offline. 2024 IEEE 63rd Conference on Decision and Control, CDC 2024. Institute of Electrical and Electronics Engineers Inc., 2024. pp. 4157-4164 (Proceedings of the IEEE Conference on Decision and Control).
@inproceedings{3399651a939e4760bd6e120b32d8434b,
title = "Efficient Online Inference and Learning in Partially Known Nonlinear State-Space Models by Learning Expressive Degrees of Freedom Offline",
abstract = "Intelligent real-world systems critically depend on expressive information about their system state and changing operating conditions, e.g., due to variation in temperature, location, wear, or aging. To provide this information, online inference and learning attempts to perform state estimation and (partial) system identification simultaneously. Current works combine tailored estimation schemes with flexible learning-based models but suffer from convergence problems and computational complexity due to the many degrees of freedom in the inference problem (i.e., parameters to determine). To resolve these issues, we propose a procedure for data-driven offline conditioning of a highly flexible Gaussian Process (GP) formulation such that online learning is restricted to a subspace spanned by expressive basis functions. Due to the simplicity of the transformed problem, a standard particle filter can be employed for Bayesian inference. In contrast to most existing works, the proposed method enables online learning of target functions that are nested nonlinearly inside a first-principles model. Moreover, we provide a theoretical quantification of the error introduced by restricting learning to a subspace. A Monte Carlo simulation study with a nonlinear battery model shows that the proposed approach enables rapid convergence with significantly fewer particles compared to a baseline and a state-of-the-art method.",
author = "Ewering, Jan-Hendrik and Volkmann, Bj{\"o}rn and Ehlers, {Simon Friedrich Gerhard} and Seel, Thomas and Meindl, {Michael Bernhard}",
note = "Publisher Copyright: {\textcopyright} 2024 IEEE.; 63rd IEEE Conference on Decision and Control, CDC 2024 ; Conference date: 16-12-2024 Through 19-12-2024",
year = "2024",
month = dec,
day = "16",
doi = "10.1109/CDC56724.2024.10886241",
language = "English",
isbn = "979-8-3503-1634-6",
series = "Proceedings of the IEEE Conference on Decision and Control",
publisher = "Institute of Electrical and Electronics Engineers Inc.",
pages = "4157--4164",
booktitle = "2024 IEEE 63rd Conference on Decision and Control, CDC 2024",
address = "United States",

}


TY - GEN

T1 - Efficient Online Inference and Learning in Partially Known Nonlinear State-Space Models by Learning Expressive Degrees of Freedom Offline

AU - Ewering, Jan-Hendrik

AU - Volkmann, Björn

AU - Ehlers, Simon Friedrich Gerhard

AU - Seel, Thomas

AU - Meindl, Michael Bernhard

N1 - Publisher Copyright: © 2024 IEEE.

PY - 2024/12/16

Y1 - 2024/12/16

N2 - Intelligent real-world systems critically depend on expressive information about their system state and changing operating conditions, e.g., due to variation in temperature, location, wear, or aging. To provide this information, online inference and learning attempts to perform state estimation and (partial) system identification simultaneously. Current works combine tailored estimation schemes with flexible learning-based models but suffer from convergence problems and computational complexity due to the many degrees of freedom in the inference problem (i.e., parameters to determine). To resolve these issues, we propose a procedure for data-driven offline conditioning of a highly flexible Gaussian Process (GP) formulation such that online learning is restricted to a subspace spanned by expressive basis functions. Due to the simplicity of the transformed problem, a standard particle filter can be employed for Bayesian inference. In contrast to most existing works, the proposed method enables online learning of target functions that are nested nonlinearly inside a first-principles model. Moreover, we provide a theoretical quantification of the error introduced by restricting learning to a subspace. A Monte Carlo simulation study with a nonlinear battery model shows that the proposed approach enables rapid convergence with significantly fewer particles compared to a baseline and a state-of-the-art method.

AB - Intelligent real-world systems critically depend on expressive information about their system state and changing operating conditions, e.g., due to variation in temperature, location, wear, or aging. To provide this information, online inference and learning attempts to perform state estimation and (partial) system identification simultaneously. Current works combine tailored estimation schemes with flexible learning-based models but suffer from convergence problems and computational complexity due to the many degrees of freedom in the inference problem (i.e., parameters to determine). To resolve these issues, we propose a procedure for data-driven offline conditioning of a highly flexible Gaussian Process (GP) formulation such that online learning is restricted to a subspace spanned by expressive basis functions. Due to the simplicity of the transformed problem, a standard particle filter can be employed for Bayesian inference. In contrast to most existing works, the proposed method enables online learning of target functions that are nested nonlinearly inside a first-principles model. Moreover, we provide a theoretical quantification of the error introduced by restricting learning to a subspace. A Monte Carlo simulation study with a nonlinear battery model shows that the proposed approach enables rapid convergence with significantly fewer particles compared to a baseline and a state-of-the-art method.

UR - http://www.scopus.com/inward/record.url?scp=86000505657&partnerID=8YFLogxK

U2 - 10.1109/CDC56724.2024.10886241

DO - 10.1109/CDC56724.2024.10886241

M3 - Conference contribution

AN - SCOPUS:86000505657

SN - 979-8-3503-1634-6

T3 - Proceedings of the IEEE Conference on Decision and Control

SP - 4157

EP - 4164

BT - 2024 IEEE 63rd Conference on Decision and Control, CDC 2024

PB - Institute of Electrical and Electronics Engineers Inc.

T2 - 63rd IEEE Conference on Decision and Control, CDC 2024

Y2 - 16 December 2024 through 19 December 2024

ER -
