 Research article
 Open access
A posteriori error estimation for model order reduction of parametric systems
Advanced Modeling and Simulation in Engineering Sciences volume 11, Article number: 5 (2024)
Abstract
This survey discusses a posteriori error estimation for model order reduction of parametric systems, including linear and nonlinear, time-dependent and steady systems. We focus on introducing the error estimators we have proposed in the past few years and comparing them with the most related error estimators from the literature. For a clearer comparison, we have translated some existing error bounds proposed in function spaces into the vector space \({\mathbb {C}}^n\) and provide the corresponding proofs in \({\mathbb {C}}^n\). Some new insights into our proposed error estimators are explored. Moreover, we review our newly proposed error estimator for nonlinear time-evolution systems, which is applicable to reduced-order models solved by arbitrary time-integration solvers. Our recent work on multi-fidelity error estimation is also briefly discussed. Finally, we derive a new inf-sup-constant-free output error estimator for nonlinear time-evolution systems. Numerical results for three examples show the robustness of the new error estimator.
Introduction
For every model order reduction (MOR) method or algorithm to be eventually used in real applications, the accuracy and efficiency of the method play key roles. While many MOR methods are numerically shown to be efficient, not all of them are guaranteed to be reliable. In other words, not all numerically demonstrated efficient MOR methods are associated with computable error estimators, let alone fast-to-compute error estimators. This work reviews a posteriori error estimators for projection-based MOR of parametric systems. Many projection-based MOR methods for parametric systems [1] have been proposed, for example, the multi-moment-matching methods [2, 3], methods based on (transfer function, projection matrix, or manifold) interpolation [4,5,6,7,8,9,10,11], the proper orthogonal decomposition (POD) methods [12,13,14], as well as the reduced basis methods [15,16,17,18]. We name those MOR methods for parametric systems pMOR methods. However, error estimation for some of the pMOR methods is not yet widely discussed, for example, error estimation for interpolation-based pMOR methods. While some a posteriori error bounds [15, 17,18,19,20,21,22,23,24,25,26,27,28] are proposed for reduced-order models (ROMs) obtained from the reduced basis method, most of them are derived using the weak form of the finite element method (FEM). In contrast, we proposed some a posteriori error estimators [29,30,31,32,33,34,35] which are independent of the numerical discretization method. The error estimators are expressed with the already discretized matrices and (nonlinear) vectors. Many of the existing error bounds or error estimators are applicable to ROMs constructed via global projection matrices, regardless of which pMOR method is used for the ROM construction. For the reduced basis method, the projection matrix and the ROM are usually constructed via a greedy process. Multi-fidelity error estimation was recently proposed in [36] to accelerate the greedy algorithm for constructing the projection matrix.
We further discuss a newly proposed error estimator [35] which is independent of the numerical time-integration scheme and therefore is able to estimate the error of the ROM solved with any time integrator. This is desired in many engineering applications, where commercial software is often used to solve the original dynamical systems. It is then also desirable that the error estimator can be applied to measure the ROM error while the ROM is solved with the same software. However, existing error estimators (bounds) cannot achieve this, since they usually require a predefined non-adaptive time-integration scheme. This limits the wide use of those error estimators (bounds).
Finally, we propose an inf-sup-constant-free output error estimator for nonlinear time-evolution systems, which avoids the computation of the smallest singular value \(\sigma _{\min }(\mu )\) of a large matrix at each queried sample of the parameter. This not only improves the accuracy of the error estimator for problems with \(\sigma _{\min }(\mu )\) close to zero, but also saves a large amount of computation, as computing the singular value has a computational complexity of at least O(N) for each parameter sample, where N is usually large.
Most of the error estimation methods reviewed in this work are based either on the residual of the ROM approximation or on both the residual and a dual system. Such techniques, which use the residual of an approximate solution together with a dual system, can be traced back to error estimation for FEM approximations, see, e.g., [37].
For clarity, we summarize the new contributions of this survey, which cannot be found in the referenced articles:

Theorem 2. It transforms the error bound presented in function space in [19, 27] into an error bound in the vector space \({\mathbb {C}}^n\). New proofs are provided in the Appendix.

Theorem 4. It derives an error bound with quadratic decay in \({\mathbb {C}}^n\).

Theorem 5 and its proof. It uses a slightly different dual system (25) and a slightly different auxiliary output \({{\tilde{y}}}^k(\mu )\) to derive the same output error bound as in [29, 30]. Please see Remark 8 for the detailed differences.

Theorem 7. It quantifies the state error estimator proposed in [38] with computable upper and lower bounds.

“Inf-sup-constant-free error estimator for time-evolution systems” section. It proposes a new inf-sup-constant-free output error estimator for parametric time-evolution systems.
In the next sections, we discuss error estimation for both time-evolution systems and steady systems. “Problem formulation” section formulates the problems considered in this work, including the original large-scale models and the corresponding ROMs. We first review rigorous error bounds for both kinds of systems and provide some new proofs in “Rigorous a posteriori error bounds” section. Then in “A posteriori error estimators” section, we review error estimators which are no longer rigorous, but decay faster than the error bounds. The error estimators usually also have lower computational complexity than the error bounds. “Error estimator for ROMs solved with any black-box time-integration solver” section reviews the newly proposed error estimator that is applicable to black-box solvers. The recently proposed multi-fidelity error estimation for large and complex systems is reviewed in “Multi-fidelity error estimation” section. It is shown that for some complex systems, the greedy process of constructing the reduced basis can be largely accelerated with multi-fidelity error estimation. “Inf-sup-constant-free error estimator for time-evolution systems” section proposes a new inf-sup-constant-free output error estimator for nonlinear time-evolution systems and presents numerical results. We conclude this survey in “Conclusion” section. This review is not exhaustive; it only contains our contributions to this topic and the most related ones from the literature. Other error estimators, in particular error estimators for different types of systems, e.g., error estimation for ROMs of second-order non-parametric systems [39, 40], are not discussed. The proper generalized decomposition (PGD) method [41], known as a non-projection-based MOR method, and the corresponding error estimation [42,43,44], are not considered in this survey either. The list of abbreviations is provided below:

MOR: model order reduction

POD: proper orthogonal decomposition

FEM: finite element method

PDEs: partial differential equations

FOM: full-order model

ROM: reduced-order model

PGD: proper generalized decomposition

IMEX: implicit–explicit

RBF: radial basis function

FFNN: feed-forward neural network

TC1: the first test case

TC2: the second test case
Problem formulation
Consider the following parametric time-evolution system of differential-algebraic equations (DAEs):
where \(t\in [0, T]\) and \(\mu \in {\mathcal {P}} \subset {\mathbb {R}}^{p}\), \({\mathcal {P}}\) is the parameter domain. \({\textbf{x}}(\mu )\in {\mathbb {R}}^N\) is the state vector of the system and \({\textbf{E}}(\mu ), {\textbf{A}}(\mu ) \in {\mathbb {R}}^{N\times N}, {\textbf{B}}(\mu ) \in {\mathbb {R}}^{N\times n_I}, {\textbf{C}}(\mu ) \in {\mathbb {R}}^{n_O \times N}, \forall \mu \in {\mathcal {P}}\), are the system matrices, \({\textbf{f}}: {\mathbb {R}}^N \times {\mathcal {P}} \mapsto {\mathbb {R}}^{N}\) is the nonlinear system operator and \({\textbf{u}}: t \mapsto {\mathbb {R}}^{n_I}\) is the external input signal. Such systems often arise from discretizing partial differential equations (PDEs) using numerical discretization schemes, or directly from physical laws. System (1) is called the full-order model (FOM) when we discuss MOR. The number of equations N in (1) is often very large to ensure high resolution of the underlying physical process. Numerically solving the FOM is expensive, especially for multi-query tasks, where the FOM has to be solved at many instances of \(\mu \). When \(n_I>1\) and \(n_O>1\), the system has multiple inputs and multiple outputs. Such problems are common in electrical or electromagnetic simulation [36]. When we consider error estimation, we usually first assume \(n_I=n_O=1\); the obtained error estimation is then extended to the more general case \(n_I>1\) and \(n_O>1\). The extension is straightforward if the error is measured using the matrix max-norm [31, 33, 38]. Therefore, if not mentioned explicitly, we consider the case \(n_I=n_O=1\) such that (1) can be written as
Here, the input signal u(t) and the output response \(y(t, \mu )\) become scalar-valued functions. Consequently, the input and output matrices reduce to vectors \({\textbf{b}} \in {\mathbb {R}}^N\) and \({\textbf{c}} \in {\mathbb {R}}^N\). All other quantities remain the same as in (1). We will briefly mention the extension to \(n_I>1\) and \(n_O>1\) at proper places. Projection-based MOR techniques obtain a ROM for (2) in the following form:
where \({\textbf{V}} \in {\mathbb {R}}^{N \times n}\) is the \(\mu \)-independent projection matrix, whose columns are the reduced basis vectors.
are the reduced system matrices. \({\hat{{\textbf{f}}}}(\cdot ,\cdot )={\textbf{V}}^T{\textbf{f}}(\cdot ,\cdot )\) is the reduced nonlinear vector. The number of equations n in (3) should be much smaller than N in (2), i.e., \(n \ll N\), so that the ROM can be readily used for repeated simulations. When the test projection matrix \({\textbf{W}}\) equals \({\textbf{V}}\), the projection is referred to as Galerkin projection. We focus on Galerkin projection, though the error estimators discussed in this work straightforwardly apply to Petrov–Galerkin projection, too.
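To make the projection concrete, the reduced matrices \({\hat{{\textbf{E}}}}={\textbf{V}}^T{\textbf{E}}{\textbf{V}}\), \({\hat{{\textbf{A}}}}={\textbf{V}}^T{\textbf{A}}{\textbf{V}}\), etc., can be formed as in the following minimal Python/NumPy sketch; the data and the basis \({\textbf{V}}\) are randomly generated and all names are illustrative only, not taken from a specific package.

```python
import numpy as np

def galerkin_rom(E, A, b, c, V):
    """Form the reduced matrices of the ROM (3) by Galerkin projection.

    V is an N x n matrix with orthonormal columns (the reduced basis).
    """
    E_hat = V.T @ E @ V   # reduced E(mu)
    A_hat = V.T @ A @ V   # reduced A(mu)
    b_hat = V.T @ b       # reduced input vector
    c_hat = c @ V         # reduced output (row) vector
    return E_hat, A_hat, b_hat, c_hat

# toy full-order data (N = 50, reduced order n = 4)
rng = np.random.default_rng(0)
N, n = 50, 4
E = np.eye(N)
A = -np.eye(N) + 0.01 * rng.standard_normal((N, N))
b = rng.standard_normal(N)
c = rng.standard_normal(N)
V, _ = np.linalg.qr(rng.standard_normal((N, n)))  # orthonormal basis

E_hat, A_hat, b_hat, c_hat = galerkin_rom(E, A, b, c, V)
```

Solving the n-dimensional reduced system instead of the N-dimensional one is what makes the ROM cheap in multi-query settings.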
For steady problems, the parametric system is time-independent,
where \({\textbf{x}}(\mu ) \in {\mathbb {R}}^N\), and \({\textbf{f}}: {\mathbb {R}}^N \times {\mathcal {P}} \mapsto {\mathbb {R}}^N\) is the nonlinear system operator. Projection-based pMOR obtains a steady parametric ROM as below,
where \({\hat{{\textbf{f}}}}(\cdot , \cdot )={\textbf{V}}^T{\textbf{f}}(\cdot ,\cdot )\). When the system is linear, the steady system then becomes
where \({\textbf{M}}(\mu ) \in {\mathbb {R}}^{N\times N}, \forall \mu \in {\mathcal {P}}.\) The corresponding steady parametric ROM is
where \({\hat{{\textbf{M}}}}(\mu )={\textbf{V}}^T{\textbf{M}}(\mu ){\textbf{V}}\).
In the following, we mainly discuss error estimation on the solutions obtained from the ROMs (3) and (7). The norm \(\Vert \cdot \Vert \) refers to the vector 2-norm or the matrix spectral norm throughout the article. The ROM in (3) or (7) is constructed using a global reduced basis \({\textbf{V}}\). The error estimation methods reviewed in this work can be applied to measure the error of the ROM obtained using a global reduced basis, irrespective of the method used to construct the reduced basis. In this sense, the error estimation methods are generic and applicable to multi-moment-matching methods, POD methods, reduced basis methods and some interpolation-based methods.
Remark 1
We point out that if the FOMs (1), (2), (4), (6) are obtained from numerical discretization of PDEs, the error estimation discussed in this work, and in most of the works referenced in the introduction, does not include the discretization error. This is the case for most reduced basis methods in the literature, and is common practice since the spatial discretization and the model reduction are mostly two separate steps. We note that when the discretization error is known, e.g., in adaptive FEM, one can adapt the model reduction error tolerance to this error so that model reduction does not contribute further to the magnitude of the approximation error, by, e.g., choosing the model reduction tolerance to be 1–2 orders of magnitude lower than the discretization error. This, however, is beyond the scope of this paper. For works on error estimation including both the discretization error and the ROM error, please refer to [45,46,47]. The error estimation reviewed in this work could be combined with the discretization error estimator [37] to realize adaptivity of the mesh size by checking the two estimated errors separately during a joint greedy process for both spatial discretization and MOR. Moreover, there are FOMs that are not derived by numerical discretization of PDEs, but rather from physical laws; for example, the modified nodal analysis (MNA) in circuit simulation directly results in systems of DAEs. For such systems, we consider the solutions to the FOMs as the exact solutions.
Rigorous a posteriori error bounds
This section reviews rigorous a posteriori error bounds for estimating the ROM error; they are upper bounds of the true errors and therefore rigorous. For time-evolution systems, most of the approaches estimate the error at discrete time instances. There are error bounds for the state error and error bounds for the output error. Output error bounds usually need a dual system to achieve faster decay. We review the error bounds for time-evolution systems and for steady systems in separate subsections.
Error bounds for time-evolution systems
The standard error estimation approaches proposed for the reduced basis method are residual-based [17,18,19, 23, 24]. In order to derive the error bound, knowledge of the temporal discretization scheme used to integrate the FOM and the ROM is assumed, e.g., the implicit Euler method, the Crank–Nicolson method, or an implicit–explicit (IMEX) method. Computing the error bound involves determining the residual vector \({\textbf{r}}(\mu ) \in {\mathbb {R}}^{N}\) at each time instance. Some goal-oriented output error estimation approaches also require the residual of a dual system.
Suppose (2) is discretized in time using a first-order IMEX scheme [48]. The linear part is discretized implicitly, while the nonlinear vector \({\textbf{f}}({\textbf{x}}(t,\mu ), \mu )\) is evaluated explicitly. The resulting discretized system is
with \({\textbf{A}}_t(\mu ):= {\textbf{E}}(\mu ) - \delta t\, {\textbf{A}}(\mu )\). Here, \(\delta t\) is the temporal discretization step. The error estimation methods discussed in this work may also apply to a time-varying \(\delta t\). For simplicity, we use \(\delta t\) to represent the time-varying case, too. The ROM (3) can be discretized in the same way as
where \({\hat{{\textbf{A}}}}_t(\mu ):= {\hat{{\textbf{E}}}}(\mu ) - \delta t\, {\hat{{\textbf{A}}}}(\mu )\). The residual from the ROM approximation is computed by substituting the approximate state vector \(\tilde{{\textbf{x}}}^{k}(\mu ):={\textbf{V}} {\hat{{\textbf{x}}}}^{k}(\mu )\) into (8). The resulting residual at the kth time step, \({\textbf{r}}^{k}(\mu )\), is
The nonlinear part of the ROM (9) is not yet hyperreduced. When hyperreduction [49, 50], e.g., the discrete empirical interpolation method (DEIM), is applied to (9), we get the ROM in the form,
where \({\mathcal {I}}_{\textbf{f}}^k(\mu )\) is an approximation of \({\textbf{f}}({\textbf{V}} {\hat{{\textbf{x}}}}^k(\mu ), {\mu })\). It is clear that in order to obtain the residual \({\textbf{r}}^{k}(\mu )\), the temporal discretization scheme for the ROM should be the same as that for the FOM, so that \({\tilde{{\textbf{x}}}}^{k}( \mu )\) in (10) and \({\textbf{x}}^{k}(\mu )\) in (8) correspond to the same time instance \(t^{k}\).
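One IMEX step of (8) and the residual (10) can be sketched as follows (Python; variable names are illustrative, and the right-hand side \({\textbf{E}}{\textbf{x}}^{k-1}+\delta t\,{\textbf{f}}({\textbf{x}}^{k-1},\mu )+\delta t\,{\textbf{b}}\,u^k\) is an assumption consistent with the definition \({\textbf{A}}_t(\mu )={\textbf{E}}(\mu )-\delta t\,{\textbf{A}}(\mu )\) above, since the displayed equations are not reproduced here):

```python
import numpy as np

def imex_step(E, A, b, f, x_prev, u_k, dt):
    """One first-order IMEX step: solve A_t x^k = E x^{k-1} + dt*f(x^{k-1}) + dt*b*u^k,
    with A_t = E - dt*A (linear part implicit, nonlinear part f explicit)."""
    A_t = E - dt * A
    rhs = E @ x_prev + dt * f(x_prev) + dt * b * u_k
    return np.linalg.solve(A_t, rhs)

def rom_residual(E, A, b, f, x_tilde_prev, x_tilde, u_k, dt):
    """Residual r^k obtained by inserting the lifted ROM solution
    x_tilde = V @ x_hat into the discretized FOM."""
    A_t = E - dt * A
    return E @ x_tilde_prev + dt * f(x_tilde_prev) + dt * b * u_k - A_t @ x_tilde

# sanity check: the exact FOM step leaves zero residual
N = 10
E, A, b = np.eye(N), -np.eye(N), np.ones(N)
f = np.tanh                                  # elementwise toy nonlinearity
x0 = np.zeros(N)
x1 = imex_step(E, A, b, f, x0, u_k=1.0, dt=0.01)
r1 = rom_residual(E, A, b, f, x0, x1, u_k=1.0, dt=0.01)
```

In practice `x_tilde` comes from the ROM, so `r1` is nonzero and its norm drives the error bounds below.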
State error bound
An a posteriori error bound \(\Delta (\mu )\) for the approximation error \({\textbf{e}}^{k}({\mu }):= {\textbf{x}}^{k}({\mu }) - {\tilde{{\textbf{x}}}}^{k}(\mu )\) can be computed based on the residual as below.
Theorem 1
(Residual-based error bound) Suppose that the nonlinear quantity \({\textbf{f}}({\textbf{x}}(t, \mu ), {\mu })\) is Lipschitz continuous in the first argument for all \({\mu }\), such that there exists a constant \(L_{\textbf{f}}\) for which
Further assume that for any parameter \({\mu }\) the projection error at the initial time step is \(\Vert {\textbf{e}}^{0}({\mu }) \Vert = \Vert {\textbf{x}}^{0}({\mu }) - {\tilde{{\textbf{x}}}}^{0}({\mu }) \Vert =\Vert {\textbf{x}}^{0}({\mu }) - {\textbf{V}} {\textbf{V}}^T {\textbf{x}}^{0}({\mu }) \Vert \), and that \({\textbf{A}}_t(\mu )\) is invertible, \(\forall \mu \in {\mathcal {P}}\). The error of the approximate state vector \({\tilde{{\textbf{x}}}}\) at the kth time step, \(\Vert {\textbf{e}}^{k}(\mu ) \Vert = \Vert {\textbf{x}}^{k}(\mu ) - {\tilde{{\textbf{x}}}}^{k}(\mu ) \Vert \), is bounded by:
where \(\zeta (\mu ):= \Vert {\textbf{A}}_t (\mu )^{-1} \Vert \) and \(\xi (\mu ):=\Vert {\textbf{A}}_t(\mu )^{-1} {\textbf{E}}(\mu ) \Vert + \delta t L_{\textbf{f}} \Vert {\textbf{A}}_t(\mu )^{-1} \Vert \).
Proof
A proof for the above theorem can be found in [35].\(\square \)
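Although the displayed bound (12) is not reproduced above, the quantities \(\zeta (\mu )\) and \(\xi (\mu )\) suggest the one-step recursion \(\Vert {\textbf{e}}^{k}\Vert \le \xi (\mu )\,\Vert {\textbf{e}}^{k-1}\Vert + \zeta (\mu )\,\Vert {\textbf{r}}^{k}\Vert \), accumulated over the time steps. A minimal sketch of this accumulation (an assumption about the precise form of (12), for illustration only):

```python
def state_error_bound(res_norms, xi, zeta, e0=0.0):
    """Accumulate the recursion ||e^k|| <= xi*||e^{k-1}|| + zeta*||r^k||.

    res_norms[k-1] holds the residual norm ||r^k||; e0 is the projection
    error of the initial condition. Returns the bound at every time step.
    """
    bounds = []
    delta = e0
    for r_k in res_norms:
        delta = xi * delta + zeta * r_k
        bounds.append(delta)
    return bounds

# if xi < 1 past residuals are damped; if xi > 1 they are amplified over time
print(state_error_bound([1.0, 1.0, 1.0], xi=0.5, zeta=2.0))
```

The sketch makes visible why such bounds sum the residuals over all previous time steps, which is the source of their conservatism discussed later.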
Remark 2
When the system has multiple inputs, the state error bound corresponding to each column of \({\textbf{B}}(\mu )\) can be obtained from Theorem 1. The final state error bound is taken as the maximum over all the column-wise derived state error bounds.
In [16, 18, 19, 51], similar state error bounds using the residual \({\textbf{r}}^k(\mu )\) are derived, where only linear systems are considered in [16, 18]. The error bound proposed in [19] for nonlinear systems includes the error of hyperreduction for the nonlinear function \({\textbf{f}}({\textbf{x}}(t, \mu ), \mu )\), such that there is an additional term in the error bound. The error bound in [19] is expressed in function space, and is not straightforward to translate into the vector space \({\mathbb {C}}^n\) as we consider here. An error bound in the vector space \({\mathbb {C}}^n\) that accounts for hyperreduction is provided in [29, 52]. A state error bound based on an implicit temporal discretization scheme is proposed in [51], where the hyperreduction error is also considered. In summary, all the error bounds discussed in [16, 18, 19, 29, 51, 52] and the one in Theorem 1 involve summing up the residuals \({\textbf{r}}^k(\mu )\) over the discrete time steps.
Remark 3
In [24], a state error bound for the linear version of the time-continuous ROM in (3) is derived, i.e., the nonlinear functions in (2) and in (3) are both assumed to be zero. The error bound is a function of continuous time and continuous parameter. The sum of the residual norms \( \Vert {\textbf{r}}^{i}(\mu ) \Vert \) over discrete time steps then becomes the integral of the residual over the time interval [0, T].
Output error bound
A straightforward output error bound for the output error
can be derived from (12) of Theorem 1 by noticing that \(\Vert {\textbf{e}}_o^{k}({\mu })\Vert \le \Vert {\textbf{c}}\Vert \Vert {\textbf{e}}^{k}({\mu })\Vert \) [24]. Finally, we have
The above output error bound is nevertheless rather conservative, especially when \(\Vert {\textbf{c}}\Vert \) is large. Moreover, the error bound depends only on the norms \(\Vert {\textbf{r}}^{i}(\mu ) \Vert \) of the primal residuals, leading to only linear decay.
Primal-dual-based output error bounds are obtained in [19, 24, 26, 27] for linear time-evolution systems, so that the resulting error bounds possess quadratic decay w.r.t. both the primal residual and the dual residual. The output error bounds in [19, 26, 27] are described in function space based on the weak form of the original PDEs. To be consistent with the system (9) using matrices and vectors, we transform the error bound in [19] into the vector space \({\mathbb {C}}^n\) for the ROM in (9). Theorem 2 shows the translated error bound. The assumptions of the theorem correspond to the assumptions made in [19] in function space. No additional or stronger limitations are assumed in Theorem 2. Although the proof of the theorem more or less follows the idea of the proof in [19], it is nevertheless very different and is therefore provided in this work. Note that the proof in [19] is divided into several lemmas and thus consists of a sequence of proofs.
Theorem 2
(Output error bound for linear systems) Given a linear FOM in (8), where \({\textbf{f}}({\textbf{x}}^k(\mu ), \mu )=0\), consider the output error of its ROM in (9) where \({\hat{{\textbf{f}}}}({\textbf{V}}{\hat{{\textbf{x}}}}^k(\mu ), \mu )=0\). Assume that there is no error in the initial condition, \({\textbf{e}}^0(\mu )={\textbf{x}}^0(\mu )-{\tilde{{\textbf{x}}}}^0(\mu )=0\), and that both matrices \({\textbf{A}}(\mu )\) and \({\textbf{E}}(\mu )\) are symmetric positive definite. An error bound for the error of a corrected output of the ROM is
where \({\hat{{\textbf{y}}}}_c^{k}({\mu })={\hat{{\textbf{y}}}}^{k}({\mu })+\sum \limits _{j=0}^{k-1} ({\tilde{{\textbf{x}}}}_{du}^{K-k+j}(\mu ))^T{\textbf{r}}^{j}(\mu )\) is a corrected output of \({\hat{{\textbf{y}}}}^{k}({\mu })\). Here, \({\textbf{r}}_{du}^{j}(\mu )\) denotes the residual at the jth time instance induced by the ROM of the dual system
and \(\Gamma ({\mu })\) is a parameter-dependent scalar. \(\delta _{du}^K({\mu })\) is a scaled upper bound for the squared dual ROM state error \(\Vert {\textbf{x}}_{du}^K(\mu )-{\textbf{V}}_{du}{\hat{{\textbf{x}}}}_{du}^K(\mu )\Vert ^2\) at the final time step \(t=t_K\). The dual ROM is defined as
where \({\hat{{\textbf{A}}}}_{du}(\mu )={\textbf{V}}_{du}^T {\textbf{A}}_t(\mu )^T {\textbf{V}}_{du}\), \({\hat{{\textbf{E}}}}_{du}(\mu )={\textbf{V}}_{du}^T {\textbf{E}}(\mu )^T {\textbf{V}}_{du}\), \({\hat{{\textbf{c}}}}_{du}={\textbf{V}}_{du}^T{\textbf{c}}(\mu )^T\).
Proof
See Appendix. \(\square \)
Remark 4
For multiple-input and multiple-output systems, an output error bound corresponding to each column of \({\textbf{B}}(\mu )\) and each row of \({\textbf{C}}(\mu )\) can be derived from Theorem 2. Then the final output error bound is taken as the maximum over all column-row-wise derived output error bounds. Please refer to the more detailed derivation for steady systems in the following “Error bounds for steady systems” section.
Remark 5
In [24], a similar primal-dual-based output error bound is obtained for the time-continuous ROM in (3). The output error bound estimates the output error at the final time T. The sums of the primal residual \({\textbf{r}}^k(\mu )\) and the dual residual \({\textbf{r}}_{du}^k(\mu )\) over time instances in (14) then become two integrals over the time variable from 0 to T. The initial approximation error \(\Vert {\textbf{e}}^{0}({\mu })\Vert _2\) was assumed to be zero in [19], while it appears in the error bound in [24]. \(\Gamma (\mu )\) is also defined differently in [24]. In contrast to [24], where the error estimation is derived in the vector space \({{\mathbb {C}}}^n\), a time-continuous output error bound is derived in [26] in function space based on the weak form of the original PDEs. The error bounds proposed in [15, 24, 27] are also reviewed in the survey paper [28] on the reduced basis method.
Remark 6
Theorem 2 is restricted in the sense that both \({\textbf{E}}(\mu )\) and \({\textbf{A}}(\mu )\) are assumed to be symmetric positive definite. Our proposed error estimators to be discussed in “A posteriori error estimators” section do not need this assumption.
The primal-dual-based output error bound in (14) has a quadratic behavior in the sense that it involves the product of the primal residuals with the dual residuals. Therefore, it is expected that this error bound decays faster than the primal-only output error bound in (13). Note that all the error bounds reviewed above estimate the error by accumulating the residuals over time.
Error bounds for steady systems
In this subsection, we discuss error bounds for steady systems as in (6). Analogous to the time-evolution systems, the error bounds for both the state error and the output error rely on the spectral norm \(\Vert {\textbf{M}}^{-1}(\mu )\Vert \), i.e., the reciprocal of the smallest singular value of the matrix \({\textbf{M}}(\mu )\), for any given \(\mu \).
State error bound
The state error bound for the state error \({\textbf{e}}(\mu )={\textbf{x}}(\mu )-{\textbf{V}} {\hat{{\textbf{x}}}}(\mu )\) can be easily derived by noticing that
Finally,
Similar error bounds for the state error have been proposed for the reduced basis method [17, 18] based on the weak form of the PDEs, and they are written in functional form. Here, we derive the error bound in the vector space \({\mathbb {C}}^n\) for the spatially discretized system (6) written using matrices and vectors. For systems with multiple inputs, \({\textbf{b}}(\mu )\) is a matrix, then \({\textbf{e}}(\mu )\) is also a matrix. Considering the ith column \({\textbf{e}}_i(\mu )\) of \({\textbf{e}}(\mu )\), we get [38]
where \({\textbf{r}}_i(\mu )\) is the ith column of \({\textbf{r}}(\mu )\). The final bound \(\Delta _s(\mu )\) is then defined as \(\Delta _s(\mu ):=\max _i \Vert {\textbf{e}}_i(\mu )\Vert \) for multiple-input systems. In [17, 23, 53], error bounds for the nonlinear steady systems (4) and (5) are also obtained, where \(\Vert {\textbf{M}}(\mu )^{-1}\Vert \) on the right-hand side of (18) is replaced by the smallest singular value of a properly defined Jacobian matrix in [17], whereas in [23], it is replaced by a lower bound on the coercivity constant of a linear operator. In [53], under some assumptions, \(\Vert {\textbf{e}}(\mu )\Vert \) is bounded as
where \({\hat{{\textbf{e}}}}(\mu )\) is a properly computed approximation of \({\textbf{e}}(\mu )\), the solution to the residual system
Here, \({\textbf{J}}_f(\mu )\) is the Jacobian matrix of \({\textbf{f}}\) in (4) w.r.t. \({\hat{{\textbf{x}}}}(\mu )\). In (19), \({\textbf{r}}_r(\mu ):={\textbf{r}}(\mu )-{\textbf{J}}_f(\mu ) {\hat{{\textbf{e}}}}(\mu )\) is the residual induced by the approximation \({\hat{{\textbf{e}}}}(\mu )\) of \({\textbf{e}}(\mu )\).
In summary, the error bound \(\Delta _s(\mu )\) as well as the error bounds derived in [17, 18, 23] all depend on the spectral norm of a properly defined matrix, or on a coercivity constant (usually a lower bound of it), which entails computational complexity depending on the large dimension N. Sometimes, the smallest singular value of \({\textbf{M}}(\mu )\) or the coercivity constant is so small that the resulting error estimation becomes very rough.
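For the linear steady system (6), the bound (18) reads \(\Vert {\textbf{e}}(\mu )\Vert \le \Vert {\textbf{M}}(\mu )^{-1}\Vert \,\Vert {\textbf{r}}(\mu )\Vert \) with \(\Vert {\textbf{M}}(\mu )^{-1}\Vert = 1/\sigma _{\min }({\textbf{M}}(\mu ))\). A small numerical check of this inequality on random toy data (illustrative only):

```python
import numpy as np

rng = np.random.default_rng(1)
N, n = 40, 5
M = np.eye(N) + 0.1 * rng.standard_normal((N, N))   # well-conditioned toy M(mu)
b = rng.standard_normal(N)
V, _ = np.linalg.qr(rng.standard_normal((N, n)))     # reduced basis

x = np.linalg.solve(M, b)                            # FOM solution
x_hat = np.linalg.solve(V.T @ M @ V, V.T @ b)        # Galerkin ROM solution
x_tilde = V @ x_hat                                  # lifted back to R^N

r = b - M @ x_tilde                                  # residual r(mu)
err = np.linalg.norm(x - x_tilde)                    # true state error
sigma_min = np.linalg.svd(M, compute_uv=False)[-1]   # smallest singular value
bound = np.linalg.norm(r) / sigma_min                # ||M^{-1}|| * ||r||
assert err <= bound                                  # the bound is rigorous
```

When \(\sigma _{\min }\) is tiny, the right-hand side can overestimate the true error by orders of magnitude, which is exactly the roughness discussed above.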
Output error bound
Analogous to the time-evolution systems, estimating the output error \({\textbf{y}}(\mu )-{\hat{{\textbf{y}}}}(\mu )\) for the ROM (7) is also based on a dual system and its ROM, defined respectively as
where \({\hat{{\textbf{M}}}}(\mu )={\textbf{V}}_{du}^T {\textbf{M}}^T(\mu ){\textbf{V}}_{du}, {\hat{{\textbf{c}}}}(\mu )={\textbf{V}}_{du}^T{\textbf{c}}^T(\mu ).\) The following theorem states an output error bound using the dual system (20) and its ROM (21).
Theorem 3
(Output error bound for linear steady systems [32, 33]) The output error \({\textbf{e}}_o(\mu ):={\textbf{y}}(\mu )-{\hat{{\textbf{y}}}}(\mu )\) of the ROM (7) is bounded by
Here \({\textbf{r}}_{du}(\mu )={\textbf{c}}^T(\mu )-{\textbf{M}}^T(\mu ){\textbf{V}}_{du}{\hat{{\textbf{x}}}}_{du}(\mu )\).
Proof
See [33]. \(\square \)
The above primal-dual-based output error bound is motivated by the primal-dual error bounds proposed earlier in [20,21,22], etc., though the derivations in [20,21,22] are in function space and therefore are different. An even earlier primal-dual-based output error bound in function space can be found in [25]. For systems with multiple inputs and multiple outputs, an error bound in the matrix max-norm can be derived [33]. To this end, we first get the error bound for the (i, j)th entry of the output error matrix \({\textbf{e}}_o(\mu )\), which can be straightforwardly derived from (22), i.e.,
where \({\textbf{r}}_{du,i}(\mu ):={\textbf{c}}_i^T(\mu )-{\textbf{M}}^T(\mu ){\textbf{V}}_{du}{\hat{{\textbf{x}}}}_{du, i}(\mu )\) is derived from the ith row \({\textbf{c}}_i^T(\mu )\) of the matrix \({\textbf{c}}(\mu )\) and \({\textbf{r}}_j(\mu ):={\textbf{b}}_j(\mu )-{\textbf{M}}(\mu ) {\textbf{V}} {\hat{{\textbf{x}}}}_j(\mu )\) is defined by the jth column \({\textbf{b}}_j(\mu )\) of \({\textbf{b}}(\mu )\). Here, \({\hat{{\textbf{x}}}}_{du, i}(\mu )\) and \({\hat{{\textbf{x}}}}_j(\mu )\) are the ith and jth columns of \({\hat{{\textbf{x}}}}_{du}(\mu )\) and \({\hat{{\textbf{x}}}}(\mu )\), respectively. The error bound for \(\Vert {\textbf{e}}_o(\mu )\Vert _{\max }\) is defined as
The error bounds in (22) and (23) do not decay quadratically, since they contain a second term which is not a quadratic function of the two residual norms. However, with some modifications or assumptions, we can derive error bounds that are quadratic.
Theorem 4
(Output error bound for linear steady systems with quadratic behavior) If \({\textbf{b}}(\mu )={\textbf{c}}^T(\mu )\),
When \({\textbf{b}}(\mu )\ne {\textbf{c}}^T(\mu )\), if we modify the output of the ROM in (7) to \({\bar{{\textbf{y}}}}(\mu )={\hat{{\textbf{y}}}}(\mu )+({\textbf{V}}_{du} {\hat{{\textbf{x}}}}_{du}(\mu ))^T{\textbf{r}}(\mu )\), then the output error bound for the output error \({\textbf{y}}(\mu )-{\bar{{\textbf{y}}}}(\mu )\) becomes
Proof
When \({\textbf{b}}(\mu )={\textbf{c}}^T(\mu )\), the dual system (20) is the same as the primal system (6), so that \({\textbf{V}}_{du}={\textbf{V}}\) and \({\hat{{\textbf{x}}}}_{du}(\mu )={\hat{{\textbf{x}}}}(\mu )\). Since Galerkin projection implies \({\textbf{V}}^T {\textbf{r}}(\mu )=0\), this leads to \(\Vert [{\textbf{V}}_{du} {\hat{{\textbf{x}}}}_{du}(\mu )]^T{\textbf{r}}(\mu )\Vert =0\) in (22) or \(\Vert [{\textbf{V}}_{du} {\hat{{\textbf{x}}}}_{du}(\mu )]^T{\textbf{r}}_j(\mu )\Vert =0\) in (23). When \({\textbf{b}}(\mu )\ne {\textbf{c}}^T(\mu )\), we have
\(\square \)
With the corrected output, the output error bound has a quadratic behavior. The same technique was previously used in [20,21,22, 25] for error analysis in function space. The error bound in (24) is in agreement with the analysis in, e.g., [20]. It is worth pointing out that using a corrected output to obtain error estimation with quadratic decay was proposed early on for the finite element method (FEM) [37]. Analogous to the state error bound in (18), the smallest singular value of \({\textbf{M}}(\mu )\) must be computed for any given \(\mu \) in order to compute the output error bound.
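A sketch of the corrected output and its quadratic bound from Theorem 4, assuming (since the displayed equations are not reproduced here) the dual system \({\textbf{M}}^T(\mu ){\textbf{x}}_{du}(\mu )={\textbf{c}}^T(\mu )\) and its Galerkin ROM; all data are random toys with illustrative names.

```python
import numpy as np

rng = np.random.default_rng(2)
N, n = 40, 6
M = np.eye(N) + 0.1 * rng.standard_normal((N, N))
b = rng.standard_normal(N)
c = rng.standard_normal(N)                            # output row vector
V, _ = np.linalg.qr(rng.standard_normal((N, n)))      # primal basis
V_du, _ = np.linalg.qr(rng.standard_normal((N, n)))   # dual basis

x = np.linalg.solve(M, b)
y = c @ x                                             # true output

x_hat = np.linalg.solve(V.T @ M @ V, V.T @ b)         # primal ROM
y_hat = c @ (V @ x_hat)                               # ROM output
r = b - M @ (V @ x_hat)                               # primal residual

x_du_hat = np.linalg.solve(V_du.T @ M.T @ V_du, V_du.T @ c)  # dual ROM
r_du = c - (V_du @ x_du_hat) @ M                      # dual residual (row form)

y_bar = y_hat + (V_du @ x_du_hat) @ r                 # corrected output
sigma_min = np.linalg.svd(M, compute_uv=False)[-1]
bound = np.linalg.norm(r_du) * np.linalg.norm(r) / sigma_min
assert abs(y - y_bar) <= bound                        # product of both residual norms
```

Because the bound is a product of the primal and dual residual norms, it decays roughly quadratically as the two bases grow, in contrast to the bound (22) with its additional non-quadratic term.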
A posteriori error estimators
This section discusses a posteriori error estimators that may lose the rigor of the error bounds. In exchange, these error estimators aim to reduce the large gap between the error bounds and the true error observed in many problems. Usually, the ratio \(\frac{\text {error bound}}{\text {true error}}\) or \(\frac{\text {error estimator}}{\text {true error}}\) is considered as the effectivity of an error bound/estimator. The a posteriori error estimators discussed in this section aim at effectivities \(\frac{\text {error estimator}}{\text {true error}}\) closer to 1 than the effectivities \(\frac{\text {error bound}}{\text {true error}}\) of the error bounds. At the same time, they usually have lower computational complexity than the error bounds. In the following subsections, we again discuss error estimation for time-evolution systems and for steady systems separately.
Error estimators for time-evolution systems
The error estimators discussed in this subsection aim to estimate the error of the time-discrete ROM in (9). The works in [29,30,31] propose output error estimators which avoid accumulating (summing up) the residuals over the time evolution, resulting in much tighter error estimators than the primal-dual based error bounds in “Error bounds for time-evolution systems” section. Furthermore, the output error estimators in [29, 30] apply to both nonlinear and linear systems. For nonlinear systems, the error estimators can also include the approximation error of hyper-reduction. We review those error estimators in the following theorems.
Output error estimators
The output error estimators need a dual system, defined as
The ROM of the dual system can be derived by
where \({\hat{{\textbf{A}}}}_t(\mu )={\textbf{V}}_{du}^T{\textbf{A}}_t(\mu )^T{\textbf{V}}_{du}, {\hat{{\textbf{c}}}}(\mu )={\textbf{V}}_{du}^T{\textbf{c}}(\mu )^T\).
Remark 7
The dual system defined in (25) is slightly different from that in [29,30,31], where the right-hand side is \(-{\textbf{c}}(\mu )^T\) instead of \({\textbf{c}}(\mu )^T\). To be consistent with the definition of the corrected output \({\bar{{\textbf{y}}}}(\mu )\) for the steady systems in Theorem 4, we use the dual system (25), based on which the corrected output for the nonlinear time-evolution systems, to be defined later, will have a uniform form as the corrected output for the steady systems.
The residual induced by the approximate solution \({\textbf{V}}_{du}{\hat{{\textbf{x}}}}_{du}\) computed from the ROM in (26) is
Define an auxiliary residual:
It can be seen that \({\tilde{{\textbf{r}}}}^{k}(\mu )\) differs from \({\textbf{r}}^{k}(\mu )\) in (10) in that \({\textbf{f}}({\tilde{{\textbf{x}}}}^{k-1}(\mu ), \mu )\) in \({\textbf{r}}^{k}(\mu )\) is replaced by \({\textbf{f}}({\textbf{x}}^{k-1}(\mu ), \mu )\), i.e., \({\textbf{f}}\) applied to the true solution \({\textbf{x}}^{k-1}(\mu )\), and \({\textbf{E}}(\mu ){\tilde{{\textbf{x}}}}^{k-1}(\mu )\) is replaced by \({\textbf{E}}(\mu ){\textbf{x}}^{k-1}(\mu )\). With \({\tilde{{\textbf{r}}}}^{k}(\mu )\), we will derive a direct relation between \({\tilde{{\textbf{r}}}}^{k}(\mu )\) and the state error \({\textbf{x}}^k(\mu )-{\tilde{{\textbf{x}}}}^k(\mu )\). This relation will aid the derivation of the output error estimation.
Theorem 5
(Primal-dual based output error estimator [29, 30]) For the time-discrete FOM (8) and the time-discrete ROM (9), assume that \({\textbf{A}}_t(\mu )\) is invertible for any \(\mu \in {\mathcal {P}}\); then the output error \(e_o^k(\mu )={\textbf{y}}^k(\mu )-{\hat{{\textbf{y}}}}^k(\mu )\) at the time instance \(t^k\) can be bounded as
where \({{\tilde{\rho }}}^k(\mu )=\Vert {\tilde{{\textbf{r}}}}^k(\mu )\Vert /\Vert {\textbf{r}}^k(\mu )\Vert \) and \(\Phi ^k(\mu )=\Vert {\textbf{A}}_t(\mu )^{-1}\Vert \Vert {\textbf{r}}_{du}^k(\mu )\Vert +\Vert {\textbf{V}}_{du}{\hat{{\textbf{x}}}}_{du}^k(\mu )\Vert \).
Proof
From the dual system (25), we have
Multiplying \({\textbf{x}}^k(\mu )-{\tilde{{\textbf{x}}}}^k(\mu )\) from the right on both sides of (29), we get
Recalling the definition of \({\tilde{{\textbf{r}}}}^k(\mu )\) in (7), we have
We combine the above two equations to obtain
Defining a new vector \({\tilde{{\textbf{y}}}}^k (\mu )={\hat{{\textbf{y}}}}^k(\mu )+({\textbf{V}}_{du}{\hat{{\textbf{x}}}}_{du}^k(\mu ))^T{\tilde{{\textbf{r}}}}^k(\mu )\) yields
Inserting the relation
into (31), leads to
Finally,
Defining \({{\tilde{\rho }}}^k(\mu )=\Vert {\tilde{{\textbf{r}}}}^k(\mu )\Vert /\Vert {\textbf{r}}^k(\mu )\Vert \), we obtain the desired error bound. \(\square \)
However, \({\tilde{{\textbf{r}}}}^k(\mu )\) involves the true solution vector \({\textbf{x}}^k(\mu )\), so that \({{\tilde{\rho }}}^k(\mu )\) is not computable. In [29, 30], the nonlinear function \({\textbf{f}}({\textbf{x}}(t, \mu ), \mu )\) is assumed to be Lipschitz continuous w.r.t. \({\textbf{x}}(t, \mu )\) in order to derive a computable estimate \(\rho (\mu )\) of \({{\tilde{\rho }}}^k(\mu )\). The output error can then be estimated as [29, 30]
The details of computing \(\rho (\mu )\) can be found in [29, 30].
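As a sanity check, the quantities appearing in Theorem 5 can be assembled for a small linear toy system. This is only a sketch under assumptions not taken from the papers: all matrices, the reduced basis, and the single output are synthetic, \({\textbf{f}}\equiv 0\), \(V_{du}=V\), and the dual system is taken as \({\textbf{A}}_t(\mu )^T{\textbf{x}}_{du}=c^T\) as in (25). The right-hand side \({{\tilde{\rho }}}^k\Phi ^k\Vert {\textbf{r}}^k\Vert \) is evaluated directly as \(\Phi ^k\Vert {\tilde{{\textbf{r}}}}^k\Vert \) using the true solution, which is affordable at this toy scale.

```python
import numpy as np

rng = np.random.default_rng(0)
n, r, K, dt = 30, 6, 40, 0.01

# toy time-discrete FOM (8) with f = 0:  A_t x^k = E x^{k-1} + dt*b*u^k,  y^k = c x^k
W = rng.standard_normal((n, n))
A_t = np.eye(n) + dt * (W @ W.T / n + np.eye(n))   # SPD, hence invertible
E, b, c = np.eye(n), rng.standard_normal(n), rng.standard_normal(n)

V, _ = np.linalg.qr(rng.standard_normal((n, r)))   # synthetic reduced basis
V_du = V                                           # dual basis (here the same)

# dual system A_t^T x_du = c^T, its ROM and the dual residual
x_du_h = np.linalg.solve(V_du.T @ A_t.T @ V_du, V_du.T @ c)
r_du = c - A_t.T @ (V_du @ x_du_h)
Phi = (np.linalg.norm(np.linalg.inv(A_t), 2) * np.linalg.norm(r_du)
       + np.linalg.norm(V_du @ x_du_h))            # Phi^k, time-independent here

x, xh, worst = np.zeros(n), np.zeros(r), 0.0
for k in range(K):
    x_old = x
    x = np.linalg.solve(A_t, E @ x_old + dt * b)                              # FOM step
    xh = np.linalg.solve(V.T @ A_t @ V, V.T @ (E @ (V @ xh)) + dt * V.T @ b)  # ROM step
    r_tilde = E @ x_old + dt * b - A_t @ (V @ xh)  # auxiliary residual (true x^{k-1})
    bound = Phi * np.linalg.norm(r_tilde)          # = rho_tilde^k * Phi^k * ||r^k||
    worst = max(worst, abs(c @ x - c @ (V @ xh)) - bound)

assert worst <= 1e-10                              # the bound holds at every step
```

For this linear case the inequality can be verified at every time step; for a nonlinear \({\textbf{f}}\), \({{\tilde{\rho }}}^k\) would instead be replaced by the Lipschitz-based estimate \(\rho (\mu )\) from [29, 30].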
Remark 8
Recall that the dual system defined in (25) is slightly different; therefore, the proof in [29, 30] does not directly apply here. Although the proof above is similar to the proofs in [29, 30], it is not the same. In particular, the variable \({\tilde{{\textbf{y}}}}^k(\mu )\) is different: in [29, 30] it is \({\tilde{{\textbf{y}}}}^k(\mu )={\hat{{\textbf{c}}}}(\mu )^T {\hat{{\textbf{x}}}}^k(\mu )-({\textbf{V}}_{du}{\hat{{\textbf{x}}}}_{du}(\mu ))^T {\tilde{{\textbf{r}}}}^k(\mu )\).
When hyper-reduction is applied to the ROM (9), we get the hyper-reduced ROM in (11). An output error estimator for the hyper-reduced ROM is derived in [29, 30] as
where \({\textbf{r}}_{{\mathcal {I}}}^k(\mu ):={\textbf{E}}(\mu ) {\textbf{V}} {\hat{{\textbf{x}}}}^{k-1}(\mu ) + \delta t \big ({\mathcal {I}}^{k-1}_{\textbf{f}}(\mu ) + {{{\textbf{b}}}} u^{k}\big )-{\textbf{A}}_t {\textbf{V}}\hat{\textbf{x}}^{k}(\mu ),\) and \(\epsilon ^k_{{\mathcal {I}}}(\mu )\) is an estimate of the interpolation error \(\Vert \delta t[{\textbf{f}}({\textbf{x}}^k(\mu ), \mu )-{\mathcal {I}}_{\textbf{f}}^k(\mu )]\Vert \). The quantity \(\epsilon ^k_{{\mathcal {I}}}(\mu )\) can be computed using, e.g., the technique in [51]. The output error estimators in (33) and (34) do not include the sum of the residuals over time instances and are therefore expected to be much tighter than the rigorous output error bound. In the numerical results in [29] for a linear system, the error estimator is shown to yield a more accurate estimate of the true error than the error bound in [19, 27].
Remark 9
For multiple-input multiple-output systems, the corresponding output error estimator can be obtained using the matrix max norm, as explained in (23) and (24).
The error estimators in (33) and (34) do not decay quadratically w.r.t. the two residuals because of the second part of \(\Phi ^k(\mu )\). In [31], we use a corrected output of the ROM, so that the finally derived error estimator includes a much smaller contribution of the second part of \(\Phi ^k(\mu )\). This makes the error estimator decay almost quadratically.
Define a corrected output \({\bar{{\textbf{y}}}}^k(\mu )={\hat{{\textbf{y}}}}^k(\mu )+({\textbf{V}}_{du}{\hat{{\textbf{x}}}}_{du}^k(\mu ))^T{\textbf{r}}^k(\mu )\) for the ROM in (9). With the same assumptions as in Theorem 5, and the Lipschitz continuity of \({\textbf{f}}({\textbf{x}}(t, \mu ), \mu )\), the output error \({{\bar{e}}}_o^k(\mu )={\textbf{y}}^k(\mu )-{\bar{{\textbf{y}}}}^k(\mu )\) can be estimated as [31]
where \({{\bar{\Phi }}}^k(\mu )=\rho \Vert {\textbf{A}}_t(\mu )^{-1}\Vert \Vert {\textbf{r}}_{du}^k(\mu )\Vert +|1-\rho |\,\Vert {\textbf{V}}_{du}{\hat{{\textbf{x}}}}_{du}^k(\mu )\Vert \).
Comparing the error estimator in (35) with that in (33), we find that the second, non-quadratic term is still there, but with a scaling factor \(|1-\rho |\) instead of \(\rho \). It is analyzed in [30] that, under certain assumptions, when the POD-greedy algorithm used to compute the projection matrix \({\textbf{V}}\) converges, \(\rho \) gets closer to 1, meaning that \(|1-\rho |\) will be closer to 0. This makes the second part in (35) tend to zero, while the second part in (28) remains away from zero. Therefore, the error estimator for the corrected output error should give a tighter estimate. The derivation closely follows that in [31] and the proof of Theorem 5, noting that the dual system and the corrected output are slightly different from those in [31]. We do not repeat it here.
With simple calculations, the corrected output error for the hyper-reduced ROM in (11) can be estimated as [31]
Error estimators for linear steady systems
Some error estimators [33, 34, 38, 54] for linear steady systems were proposed in order to avoid estimating/computing the spectral norm \(\Vert {\textbf{M}}(\mu )^{-1}\Vert \) involved in the error bounds in “Error bounds for steady systems” section. An approach based on randomized residuals is proposed in [54], where randomized systems are defined to obtain error estimators for both the state error and the output error. It is discussed in [34] and [38] that the error estimators in [54] are theoretically less accurate than the estimators proposed in [33, 38]. The error estimators in [54] underestimate the true error more easily than the estimators in [33, 38], which is also numerically demonstrated in [33, 38]. Here we review the error estimators proposed in our recent work [33, 34, 38].
State error estimators
The error estimator proposed in [38] estimates the state error for linear steady systems. For the FOM in (6), the error \({\textbf{e}}(\mu ):={\textbf{x}}(\mu )-{\textbf{V}} {\hat{{\textbf{x}}}}(\mu )\) of the approximate state \({\textbf{V}} {\hat{{\textbf{x}}}}(\mu )\) computed by the ROM (7) can be estimated as
where \({\hat{{\textbf{x}}}}_r(\mu )\) is the solution to the following ROM
Here, \({\textbf{M}}_r(\mu )={\textbf{V}}_r^T{\textbf{M}}(\mu ){\textbf{V}}_r, {\hat{{\textbf{r}}}}(\mu )={\textbf{V}}_r^T {\textbf{r}}(\mu )\) with \({\textbf{V}}_r\) being properly derived, and \({\textbf{r}}(\mu )={\textbf{b}}(\mu )-{\textbf{M}}(\mu ){\textbf{V}}{\hat{{\textbf{x}}}}(\mu )\). The system (38) is the ROM of the following residual system
Remark 10
We note that a similar technique of using an approximate solution to a residual system, as an error estimator for the state error, was already proposed for the finite element method (FEM) (see [37] and the references therein). There, the approximate solution was not obtained from a ROM of the residual system.
The accuracy of the error estimator \({\textbf{V}}_r{\hat{{\textbf{x}}}}_r(\mu )\) in (37) is quantified in [38]:
Theorem 6
(Quantifying the error estimator [38]) The state error \({\textbf{e}}(\mu )\) is lower and upper bounded as
where \(\delta (\mu )={\textbf{x}}_r(\mu )-{\textbf{V}}_r{\hat{{\textbf{x}}}}_r(\mu )\).
Whenever the ROM (38) of the residual system is accurate enough, \(\delta (\mu )\) will be small. However, how to further quantify the error \(\delta (\mu )\) is left open. We derive the following theorem with computable upper and lower bounds.
Theorem 7
(Quantifying the error estimator with computable upper and lower bound) The state error \({\textbf{e}}(\mu )\) is lower and upper bounded as
where \({{\bar{\delta }}}(\mu )= \Vert {\textbf{M}}(\mu )^{-1}\Vert \Vert {\textbf{r}}_r(\mu )\Vert \).
Proof
The proof is straightforward. We notice that
with \({\textbf{r}}_r(\mu ):={\textbf{r}}(\mu )-{\textbf{M}}(\mu ){\textbf{V}}_r{\hat{{\textbf{x}}}}_r(\mu )\). Then
\(\square \)
Note that for linear systems, the upper bound in (40) is only half the upper bound in (19).
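The state error estimator (37) and the two-sided bound of Theorem 7 can be illustrated numerically on a synthetic parametric system. This is a minimal sketch under assumptions of our own: the affine matrices, the parameter samples, and the construction of \({\textbf{V}}\) and \({\textbf{V}}_r\) from a few primal and residual-system snapshots are simple placeholder choices, not necessarily the construction used in [38].

```python
import numpy as np

rng = np.random.default_rng(1)
n = 40

# parametric steady FOM (6): M(mu) x = b, one affine parameter
S = rng.standard_normal((n, n))
M0, M1 = 2.0 * np.eye(n), 0.1 * (S + S.T)
b = rng.standard_normal(n)
M = lambda mu: M0 + mu * M1

# primal basis V from a few state snapshots
V, _ = np.linalg.qr(np.column_stack(
    [np.linalg.solve(M(m), b) for m in (0.1, 0.5, 0.9)]))

def rom_and_residual(mu):
    xh = np.linalg.solve(V.T @ M(mu) @ V, V.T @ b)     # ROM (7)
    return xh, b - M(mu) @ (V @ xh)                    # primal residual r(mu)

# V_r from snapshots of the residual system (39): M(mu) x_r = r(mu)
V_r, _ = np.linalg.qr(np.column_stack(
    [np.linalg.solve(M(m), rom_and_residual(m)[1]) for m in (0.2, 0.8)]))

mu = 0.37                                              # test parameter
xh, r = rom_and_residual(mu)
xr_h = np.linalg.solve(V_r.T @ M(mu) @ V_r, V_r.T @ r) # ROM (38)
est = np.linalg.norm(V_r @ xr_h)                       # state error estimator (37)

e = np.linalg.solve(M(mu), b) - V @ xh                 # true state error
r_r = r - M(mu) @ (V_r @ xr_h)                         # residual of the residual system
delta_bar = np.linalg.norm(np.linalg.inv(M(mu)), 2) * np.linalg.norm(r_r)

# two-sided bound of Theorem 7 (tiny slack for floating point)
assert est - delta_bar <= np.linalg.norm(e) + 1e-10
assert np.linalg.norm(e) <= est + delta_bar + 1e-10
```

The bound holds by construction, since \({\textbf{e}}(\mu )-{\textbf{V}}_r{\hat{{\textbf{x}}}}_r(\mu )={\textbf{M}}(\mu )^{-1}{\textbf{r}}_r(\mu )\); the sketch merely checks the chain of inequalities numerically.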
Output error estimators
The error estimators in [33, 34] estimate the error \({\textbf{e}}_o(\mu ):={\textbf{y}}(\mu )-{\hat{{\textbf{y}}}}(\mu )\) of the output \({\hat{{\textbf{y}}}}(\mu )\) computed from the ROM (7). In [34], we derive the following primal-dual based output error estimator
where \({\hat{{\textbf{x}}}}_{du}(\mu )\) is the solution to the reduced dual system
and \({\hat{{\textbf{M}}}}_{du}(\mu ) ={\textbf{V}}_{du}^T{\textbf{M}}(\mu )^T{\textbf{V}}_{du}\), \(\hat{\textbf{c}}_{du}(\mu )={\textbf{V}}_{du}^T{\textbf{c}}(\mu )^T\). The reduced dual system is a ROM of the dual system,
Remark 11
In [37] and the references therein, the FEM approximation error was also estimated using a similarly defined dual system in the function space. However, the approximate solution to the dual system there is not the solution of a ROM of the dual system. The approximate dual solution is then multiplied with the residual of the FEM approximation to the original PDEs to constitute a primal-dual based error estimator for the output error of the FEM approximation.
The randomized output error estimator in [54] is based on the output error estimator \(\Delta _{o_1}(\mu )\). On the one hand, it is analysed in [34] that \(\Delta _{o_1}(\mu )\) is more accurate than the randomized output error estimator; on the other hand, it is also numerically demonstrated in [34] that \(\Delta _{o_1}(\mu )\) is nevertheless less accurate than the other estimators proposed in [33, 34]. In the following, we first introduce the primal-dual output error estimator in [33], which involves a dual-residual system defined as
and its ROM
where \({\hat{{\textbf{M}}}}_{r_{du}}(\mu )={\textbf{V}}_{r_{du}}^T {\textbf{M}}(\mu ){\textbf{V}}_{r_{du}}\), \({\hat{{\textbf{r}}}}_{du}(\mu )={\textbf{V}}_{r_{du}}^T{\textbf{r}}_{du}(\mu )\), with \({\textbf{V}}_{r_{du}}\) being properly computed. The dual residual \({\textbf{r}}_{du}(\mu ):={\textbf{c}}(\mu )^T-{\textbf{M}}(\mu )^T{\textbf{V}}_{du}{\hat{{\textbf{x}}}}_{du}(\mu )\) is the residual induced by the approximate solution \({\textbf{V}}_{du}{\hat{{\textbf{x}}}}_{du}(\mu )\) computed from the dual ROM (41). A primal-dual and dual-residual based output error estimator proposed in [33] is stated as follows. For the FOM in (6), the output error \({\textbf{e}}_o(\mu )\) of the ROM (7) can be estimated as
The error estimator \(\Delta _{o_2}(\mu )\) in (43) has an additional term \(({\textbf{V}}_{r_{du}}{\hat{{\textbf{x}}}}_{r_{du}}(\mu ))^T{\textbf{r}}(\mu )\) as compared to \(\Delta _{o_1}(\mu )\). We now discuss the accuracy of both estimators in the following theorems.
Theorem 8
(Quantifying the output error estimator \(\Delta _{o_1}(\mu )\) [34]) The output error \({\textbf{e}}_o(\mu )\) is bounded as
where \(\delta _1(\mu ):=[{\textbf{x}}_{du}(\mu )-{\textbf{V}}_{du}{\hat{{\textbf{x}}}}_{du}(\mu )]^T{\textbf{r}}(\mu )={\textbf{r}}_{du}(\mu )^T({\textbf{M}}(\mu ))^{-1}{\textbf{r}}(\mu )\).
Proof
See [34].\(\square \)
Theorem 9
(Quantifying the output error estimator \(\Delta _{o_2}(\mu )\) [34]) The output error \({\textbf{e}}_o(\mu )\) is bounded as
where \(\delta _2(\mu ):=[{\textbf{x}}_{r_{du}}(\mu )-{\textbf{V}}_{r_{du}}{\hat{{\textbf{x}}}}_{r_{du}}(\mu )]^T{\textbf{r}}(\mu )\).
Proof
See [34].\(\square \)
We can observe that
where \({\textbf{r}}_{r_{du}}(\mu ):={\textbf{r}}_{du}(\mu )-{\textbf{M}}(\mu )^T{\textbf{V}}_{r_{du}}{\hat{{\textbf{x}}}}_{r_{du}}(\mu )\) is the residual induced by the reduced dual-residual system.
We can further derive upper bounds for \(\delta _1(\mu )\) and \(\delta _2(\mu )\), respectively. Actually,
Although we have no proof yet, it is expected that \(\Vert {\textbf{r}}_{r_{du}}(\mu )\Vert \le \Vert {\textbf{r}}_{du}(\mu )\Vert \) in general, since \({\textbf{r}}_{r_{du}}(\mu )\) is the residual induced by the ROM of the dual-residual system, whose right-hand side is \({\textbf{r}}_{du}(\mu )\). Finally, we should have \({{\bar{\delta }}}_2(\mu ) \le {{\bar{\delta }}}_1(\mu )\), indicating that \(\Delta _{o_2}(\mu )\) should be more accurate than \(\Delta _{o_1}(\mu )\). On the other hand, we know that
Then \({{\bar{\delta }}}_2(\mu ) \le {{\bar{\delta }}}_1(\mu )\) implies that underestimation of the true error by \(\Delta _{o_2}(\mu )\) should be less than by \(\Delta _{o_1}(\mu )\).
In [34], we further proposed another output error estimator variant, \(\Delta _{o_3}(\mu )\), which has lower computational complexity than \(\Delta _{o_2}(\mu )\) but similar, sometimes even better, accuracy. It does not depend on the dual system and/or the dual-residual system, as \(\Delta _{o_1}(\mu )\) and \(\Delta _{o_2}(\mu )\) do, but on the primal-residual system in (39). \(\Delta _{o_3}(\mu )\) is defined as
Comparing \(\Delta _{o_3}(\mu )\) with the state error estimator \({\textbf{V}}_r{\hat{{\textbf{x}}}}_r(\mu )\) in (37), we see that the only difference is the output matrix \({\textbf{c}}(\mu )\). Both are derived by employing the primal-residual system (39).
Theorem 10
(Quantifying the output error estimator \(\Delta _{o_3}(\mu )\) [34]) The output error \({\textbf{e}}_o(\mu )\) is bounded as
where \(\delta _3(\mu ):={\textbf{c}}(\mu )[{\textbf{x}}_r(\mu )-{\textbf{V}}_r {\hat{{\textbf{x}}}}_r (\mu )]\).
With simple calculations, an upper bound of \(\delta _3(\mu )\) can be derived as
It can easily be seen that computing \(\Delta _{o_3}(\mu )\) requires only one additional ROM, i.e., the ROM (38) of the primal-residual system, while computing \(\Delta _{o_2}(\mu )\) requires two additional ROMs. Theoretically, the upper bound \({{\bar{\delta }}}_2(\mu )\) should decay faster than the upper bound \({{\bar{\delta }}}_3(\mu )\), implying that \(\Delta _{o_2}(\mu )\) should be more accurate than \(\Delta _{o_3}(\mu )\). However, in our numerical simulations on several different problems [34], \(\Delta _{o_3}(\mu )\) is even more accurate than \(\Delta _{o_2}(\mu )\).
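The output error estimator \(\Delta _{o_3}(\mu )\) can be sketched with the same kind of synthetic ingredients as the state error estimator above (again placeholder matrices and a simple snapshot-based choice of \({\textbf{V}}\) and \({\textbf{V}}_r\), not the construction of [34]). The check below uses the crude upper bound \(\Vert {\textbf{c}}(\mu )\Vert \Vert {\textbf{M}}(\mu )^{-1}\Vert \Vert {\textbf{r}}_r(\mu )\Vert \) on \(|\delta _3(\mu )|\), which follows from \(\delta _3(\mu )={\textbf{c}}(\mu ){\textbf{M}}(\mu )^{-1}{\textbf{r}}_r(\mu )\).

```python
import numpy as np

rng = np.random.default_rng(2)
n = 40
S = rng.standard_normal((n, n))
M0, M1 = 2.0 * np.eye(n), 0.1 * (S + S.T)
b, c = rng.standard_normal(n), rng.standard_normal(n)
M = lambda mu: M0 + mu * M1

V, _ = np.linalg.qr(np.column_stack(
    [np.linalg.solve(M(m), b) for m in (0.1, 0.5, 0.9)]))

def rom_and_residual(mu):
    xh = np.linalg.solve(V.T @ M(mu) @ V, V.T @ b)       # ROM (7)
    return xh, b - M(mu) @ (V @ xh)                      # primal residual

V_r, _ = np.linalg.qr(np.column_stack(                   # basis for (38)
    [np.linalg.solve(M(m), rom_and_residual(m)[1]) for m in (0.2, 0.8)]))

mu = 0.63
xh, r = rom_and_residual(mu)
xr_h = np.linalg.solve(V_r.T @ M(mu) @ V_r, V_r.T @ r)   # ROM (38)

Delta_o3 = abs(c @ (V_r @ xr_h))                         # output error estimator
e_o = c @ np.linalg.solve(M(mu), b) - c @ (V @ xh)       # true output error

# |delta_3| <= ||c|| * ||M^{-1}|| * ||r_r||  (crude bound on the estimator gap)
r_r = r - M(mu) @ (V_r @ xr_h)
gap = (np.linalg.norm(c) * np.linalg.norm(np.linalg.inv(M(mu)), 2)
       * np.linalg.norm(r_r))
assert abs(abs(e_o) - Delta_o3) <= gap + 1e-10
```

Only one additional reduced solve (the ROM of the primal-residual system) is needed on top of the primal ROM, which is the computational advantage of \(\Delta _{o_3}(\mu )\) discussed above.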
Error estimator for ROMs solved with any black-box time-integration solver
The error bounds and error estimators reviewed in the previous sections are all residual based. In particular, for time-evolution systems, the error bound and estimators need the residual \({\textbf{r}}^k(\mu )\) at the corresponding time instances \(t_k, \ k=1,\ldots , K\). Clearly, to compute \({\textbf{r}}^k(\mu )\), the temporal discretization scheme applied to the FOM must be known, so that \({\textbf{r}}^k(\mu )\) (10) can be derived by inserting the approximate solution \({\tilde{{\textbf{x}}}}^k(\mu )\) into the temporal discretization scheme, e.g., (8), and subtracting the left-hand side of the first equation from its right-hand side. Moreover, the temporal discretization scheme (8) for the ROM (3) must be the same as that for the FOM to make sure that \({\tilde{{\textbf{x}}}}^k(\mu )\) computed from the ROM (9) corresponds to the true solution \({\textbf{x}}^k(\mu )\) at the same time instance \(t_k\). These two requirements on the FOM and the ROM become limitations for the error bounds (estimators) when the FOM is simulated by a black-box time-integration solver and/or when the ROM is also desired to be solved using the same black-box solver.
In [35], we propose a new error estimator that is applicable when both the FOM and the ROM are solved by a black-box solver. We make use of a user-defined implicit-explicit (IMEX) temporal discretization scheme to derive the new error estimator. Although potentially any IMEX scheme can be applied, we consider the first-order IMEX scheme (8) in this survey. Note that the second-order IMEX scheme is also used in [35].
Since the first-order IMEX scheme (8) differs from the black-box solver, we obtain a defect, or mismatch, when we insert the solution snapshots \({\textbf{x}}^{k}(\mu )\) computed from the black-box solver into the first-order IMEX scheme.
Although the time-integration scheme of the black-box solver is invisible, we can use the solution snapshots \({\textbf{x}}^k(\mu ), \ k=0,\ldots ,K\), at some samples of \(\mu \) to learn the defect vector. We then use \({\textbf{d}}^{k}(\mu )\) to correct the user-defined scheme (8), such that its solution recovers the solution \({\textbf{x}}^{k}(\mu )\) computed by the black-box solver; the temporal discretization scheme of the black-box solver then becomes visible via the corrected time-discrete FOM below,
It is clear that if \({\textbf{d}}^{k}(\mu )\) can be accurately learned, then not only does \({\textbf{x}}_c^{k}(\mu )\) in (45) recover \({\textbf{x}}^{k}(\mu )\), but the FOM in (45) is also equivalent to the temporal discretization scheme of the black-box solver. The ROM of the FOM in (45) can be obtained as
where \({\hat{{\textbf{A}}}}_t(\mu ), \hat{\textbf{E}}(\mu ), \hat{\textbf{f}}(\cdot , \cdot ),{\hat{{\textbf{b}}}}(\mu ), {\hat{{\textbf{c}}}}(\mu )\) are defined as in (3) and \({\hat{{\textbf{d}}}}^{k}(\mu )={\textbf{V}}^T{\textbf{d}}^{k}(\mu )\). We make use of both the corrected FOM (45) and the corresponding ROM (46) to derive an estimator for the output error \({\textbf{y}}^k(\mu )-{\hat{{\textbf{y}}}}^k(\mu )\), where \({\textbf{y}}^k(\mu )\) and \({\hat{{\textbf{y}}}}^k(\mu )\) are the outputs of the FOM in (2) and the ROM in (3) at any time instance \(t^k\), respectively. Both systems can be solved using any black-box solver.
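The defect mechanism can be illustrated on a toy linear problem. In this sketch, all matrices are synthetic assumptions, the "black-box" solver is secretly played by a Crank-Nicolson scheme, the user-defined scheme is the first-order scheme (8) with \({\textbf{f}}\equiv 0\), and the defect is computed exactly from the snapshots instead of being learned; the point is only that the \({\textbf{d}}\)-corrected scheme reproduces the black-box trajectory.

```python
import numpy as np

rng = np.random.default_rng(3)
n, K, dt = 20, 30, 0.05
S = rng.standard_normal((n, n))
Kmat = S @ S.T / n + np.eye(n)          # x' = -Kmat x + b, stable linear test ODE
b = rng.standard_normal(n)

# "black-box" solver: here secretly Crank-Nicolson with u(t) = 1
L = np.eye(n) + 0.5 * dt * Kmat
R = np.eye(n) - 0.5 * dt * Kmat
x_bb = [np.zeros(n)]
for k in range(K):
    x_bb.append(np.linalg.solve(L, R @ x_bb[-1] + dt * b))

# user-defined first-order scheme (8) with f = 0: A_t x^k = E x^{k-1} + dt*b*u^k
A_t, E = np.eye(n) + dt * Kmat, np.eye(n)

# defect of the black-box snapshots w.r.t. the user-defined scheme
d = [A_t @ x_bb[k] - E @ x_bb[k - 1] - dt * b for k in range(1, K + 1)]

# the d-corrected scheme reproduces the black-box trajectory
x_c, err = np.zeros(n), 0.0
for k in range(1, K + 1):
    x_c = np.linalg.solve(A_t, E @ x_c + dt * b + d[k - 1])
    err = max(err, np.linalg.norm(x_c - x_bb[k]))
assert err < 1e-10
```

In practice \({\textbf{d}}^k(\mu )\) is only available at training samples and must be learned over the parameter domain, which is the topic of the paragraph on POD plus RBF/FFNN below.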
Given the FOM in (2), assuming that \({\textbf{A}}_t(\mu )\) is nonsingular for all \(\mu \in {\mathcal {P}}\), the nonlinear function \({\textbf{f}}({\textbf{x}}(t,\mu ),\mu )\) is Lipschitz continuous w.r.t. \({\textbf{x}}(t, \mu )\), and the defect vector \({\textbf{d}}(\mu )\) can be accurately learned, the output error \({\textbf{y}}^k(\mu )-{\hat{{\textbf{y}}}}^k(\mu )\) of the ROM (3) can be estimated as [35]
where \({\textbf{r}}^k_c(\mu ):={\textbf{E}}(\mu ){\textbf{V}}{\hat{{\textbf{x}}}}^{k-1}_c(\mu )+\delta t ({\textbf{f}}({\textbf{V}} {\hat{{\textbf{x}}}}^{k-1}_c(\mu ), \mu )+{\textbf{b}}(\mu )u^k)+{\textbf{d}}^k(\mu )-{\textbf{A}}_t(\mu ) {\textbf{V}}{\hat{{\textbf{x}}}}^k_c(\mu )\) is the residual induced by the \({\textbf{d}}\)-corrected ROM (46), and \({\bar{{\textbf{y}}}}^k_c(\mu ):={\hat{{\textbf{y}}}}^k_c(\mu )+({\textbf{V}}_{du}{\hat{{\textbf{x}}}}_{du}^k(\mu ))^T {\textbf{r}}^k_c(\mu )\) is a corrected output of the \({\textbf{d}}\)-corrected ROM in (46). \({{\bar{\Phi }}}^k(\mu )\), \({\textbf{V}}, {\textbf{V}}_{du}\), \({\hat{{\textbf{x}}}}_{du}^k(\mu )\) and \({\textbf{r}}_{du}^k(\mu )\) are defined as before.
The corrected output \({\bar{{\textbf{y}}}}^k_c(\mu ):={\hat{{\textbf{y}}}}^k_c(\mu )+({\textbf{V}}_{du}{\hat{{\textbf{x}}}}_{du}^k(\mu ))^T {\textbf{r}}^k_c(\mu )\) is defined slightly differently from that in [35], where \({\bar{{\textbf{y}}}}^k_c(\mu ):={\hat{{\textbf{y}}}}^k_c(\mu )-({\textbf{V}}_{du}{\hat{{\textbf{x}}}}_{du}^k(\mu ))^T {\textbf{r}}^k_c(\mu )\). Its corresponding dual system in [35]:
is also slightly different from that in (25). Please also refer to Remark 7. However, the derivation of the error estimator is very similar to that in [35] and is not repeated here.
When hyperreduction is considered, the ROM (3) becomes
where \({{\mathcal {I}}}_{\textbf{f}} (\mu )\) approximates \({\textbf{f}}({\textbf{V}} {{\hat{{{{\textbf{x}}}}}}}, \mu )\) via hyper-reduction. The corresponding \({\textbf{d}}\)-corrected ROM is
which is the \({\textbf{d}}\)-corrected hyper-reduced ROM. Error estimation for the output error of the ROM (48) is stated as
where \({\textbf{r}}^k_{c,{\mathcal {I}}}(\mu ):={\textbf{E}}(\mu ) {\textbf{V}}{\hat{{\textbf{x}}}}^{k-1}_c(\mu )+\delta t ({{\mathcal {I}}}_{\textbf{f}}^k(\mu )+{\textbf{b}}(\mu )u^k)+{\textbf{d}}^k(\mu )-{\textbf{A}}_t(\mu ) {\textbf{V}}{\hat{{\textbf{x}}}}^k_c(\mu )\) is the residual induced by the \({\textbf{d}}\)-corrected ROM (49), and \(\epsilon ^{k}_{\mathcal {I}}(\mu )\) is the hyper-reduction error defined as before.
Now we come to the problem of accurately learning \({\textbf{d}}(\mu )\), so that (47) gives an accurate error estimate for ROMs solved with black-box solvers. In [35], we used proper orthogonal decomposition (POD) combined with radial basis function (RBF) interpolation or with a feedforward neural network (FFNN) to learn \({\textbf{d}}(\mu )\). POD is first used to project \({\textbf{d}}(\mu ) \in {\mathbb {R}}^N\) onto a lower-dimensional subspace. RBF or FFNN is then used to learn the projected short vector \({\hat{{\textbf{d}}}}(\mu ) \in {\mathbb {R}}^{n_d}, n_d\ll N\), where \({\textbf{d}}(\mu ) \approx {\textbf{V}}_d {\hat{{\textbf{d}}}}(\mu )\) and \({\textbf{V}}_d \in {\mathbb {R}}^{N\times n_d}\) is computed from a two-stage singular value decomposition (SVD) of the snapshot matrix \({\textbf{D}}:=[{\textbf{d}}^0(\mu _1), \ldots , {\textbf{d}}^K(\mu _1), \ldots , {\textbf{d}}^0(\mu _s), \ldots , {\textbf{d}}^K(\mu _s)]\). Each \({\textbf{d}}^i(\mu _j)\) is the defect vector evaluated at time instance \(t_i\) and parameter sample \(\mu _j\). All details can be found in [35].
Remark 12
While the new error estimator is based on our earlier proposed output error estimator in [31], the idea can be directly applied to derive a posteriori state error estimators (bounds).
Multi-fidelity error estimation
This section briefly reviews our recent multi-fidelity error estimation used for accelerating the weak greedy algorithm. Weak greedy algorithms are often used to iteratively construct the reduced basis (\({\textbf{V}}\)) for MOR of parametric steady systems. A sketch of the algorithm is given as Algorithm 1.
Key points for the greedy algorithm to converge fast are a properly chosen training set \(\Xi \) and an efficient, fast-to-compute error estimator (bound) \(\Delta (\mu )\). For some complex problems, although the cardinality of the training set \(\Xi \) is not large, computing \(\Delta (\mu )\) over \(\Xi \) at each iteration is slow. In [36], we propose the concept of multi-fidelity error estimation to accelerate the greedy iteration.
We start with a coarse training set \(\Xi _c\) with even smaller cardinality, i.e., \(|\Xi _c|\le |\Xi |\), and evaluate \(\Delta (\mu )\) only over \(\Xi _c\) at each greedy iteration. At the same time, a surrogate estimator is constructed based on the already available values of \(\Delta (\mu )\) over \(\Xi _c\). This surrogate is supposed to be cheaper to compute than \(\Delta (\mu )\), so that it can be evaluated quickly over a fine training set \(\Xi _f\) with much larger cardinality than \(\Xi \). Using the results of the surrogate estimator over \(\Xi _f\), we enrich \(\Xi _c\) with the parameter sample selected by the surrogate, namely the sample corresponding to the largest value of the surrogate. The parameter sample corresponding to the smallest value of \(\Delta (\mu )\) over \(\Xi _c\) is simultaneously removed from \(\Xi _c\). This way, \(\Xi _c\) stays small over the iterations but keeps being updated so that it contains only the important parameter samples. In the greedy process, those samples correspond to large ROM errors and are good candidates for greedy parameter selection in the next iterations.
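One coarse-set update step of this idea can be sketched as follows. Everything here is a placeholder assumption: `Delta` is a cheap stand-in for the true (expensive) estimator, and the surrogate is a simple polynomial fit rather than the surrogate model of [36]; only the enrich/remove bookkeeping of \(\Xi _c\) is the point.

```python
import numpy as np

Xi_f = np.linspace(0.0, 1.0, 201)            # fine training set (surrogate only)
Xi_c = list(np.linspace(0.0, 1.0, 5))        # small coarse set (true estimator)

def Delta(mu):                               # placeholder for the expensive estimator
    return float(np.exp(-((mu - 0.47) ** 2) / 0.02))

for _ in range(3):                           # a few surrogate-assisted updates
    vals = [Delta(mu) for mu in Xi_c]
    # cheap surrogate of Delta: polynomial interpolation over the coarse set
    coef = np.polyfit(Xi_c, vals, deg=len(Xi_c) - 1)
    mu_new = float(Xi_f[np.argmax(np.polyval(coef, Xi_f))])
    Xi_c.pop(int(np.argmin(vals)))           # drop the sample with smallest Delta
    if mu_new not in Xi_c:                   # enrich with the surrogate's argmax
        Xi_c.append(mu_new)

assert len(Xi_c) <= 5 and all(0.0 <= mu <= 1.0 for mu in Xi_c)
```

The coarse set thus never grows, while the expensive estimator is evaluated on at most \(|\Xi _c|\) samples per iteration instead of on the whole fine set.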
This process of using a surrogate estimator in the greedy algorithm was originally proposed in [55] for nonlinear time-evolution systems. In [36], we define this as bi-fidelity error estimation, since both the original estimator \(\Delta (\mu )\) and a surrogate estimator are used for estimating the error in the greedy process. Based on that, we further propose multi-fidelity error estimation, which depends on the structure of the original error estimator \(\Delta (\mu )\) [36]. Taking the output error estimator \(\Delta _{o_3}(\mu )\) as an example, two projection matrices \({\textbf{V}}\) and \({\textbf{V}}_r\) must be constructed in order to compute \(\Delta _{o_3}(\mu )\). When we replace \(\Delta (\mu )\) in Algorithm 1 with \(\Delta _{o_3}(\mu )\), we need to iteratively update both \({\textbf{V}}\) and \({\textbf{V}}_r\) with snapshots by solving the FOM in (6) at two greedily selected parameter samples. If at a certain stage, e.g., when the estimated ROM error is smaller than a small value \(\theta <1\) but still larger than the error tolerance tol, we stop updating \({\textbf{V}}_r\), then the FOM in (39) at one of the two selected parameter samples does not have to be solved. Consequently, we save the runtime of solving a large FOM in the subsequent iterations. At the same time, the original \(\Delta _{o_3}(\mu )\) is degraded to a low-fidelity error estimator \(\Delta _{o_3}^\ell (\mu )\). The surrogate estimator is then constructed based on this low-fidelity estimator in the later stage of the greedy process. In total, we employ the original estimator \(\Delta _{o_3}(\mu )\), a low-fidelity estimator \(\Delta _{o_3}^\ell (\mu )\), and their respective surrogates in the whole greedy process. We call this multi-fidelity error estimation. We sketch the concept in Fig. 1.
It is shown in [36] that the greedy process employing multi-fidelity error estimation is much faster than the standard weak greedy algorithm for some large-scale time-delay systems with hundreds of delays.
The error estimators presented in the previous sections have been numerically compared in the individual papers. For an overview, we list the comparisons below; the sections named are those of this survey where the corresponding error estimators are reviewed.

- In [29, 30], the error estimator proposed there (“Error estimators for time-evolution systems” section) is numerically compared with the error bound in [19, 27] (“Error bounds for time-evolution systems” section) for parametric time-evolution systems.
- In [31], the error estimator with corrected output (“Error estimators for time-evolution systems” section) is numerically compared with the error estimator in [29, 30] for parametric time-evolution systems.
- In [38], the proposed state error estimator (“Error estimators for linear steady systems” section) is compared with the state error bound (“Error bounds for steady systems” section) for parametric steady systems from computational electromagnetics.
- In [33], the newly proposed output error estimator \(\Delta _{o_1}(\mu )\) (“Error estimators for linear steady systems” section) is compared with the output error bound (22) in [32] (“Error bounds for steady systems” section) for parametric linear steady systems. It is also compared with an existing randomized error estimator from [54].
- In [34], some more output error estimators are proposed and compared with each other; they are also compared with the output error estimator \(\Delta _{o_1}(\mu )\) proposed in [33] (“Error estimators for linear steady systems” section).
- In [35], a new error estimator (“Error estimator for ROMs solved with any black-box time-integration solver” section), applicable when both the FOM and the ROM are solved by a black-box solver, is compared with the output error estimator in [31] for parametric nonlinear time-evolution systems.
- In [36], the multi-fidelity error estimation (“Multi-fidelity error estimation” section) is numerically compared with the standard greedy process using only a single high-fidelity error estimator, for time-delay systems with more than one hundred delays.
Inf-sup-constant-free error estimator for time-evolution systems
While the error estimators for time-evolution systems described in “Error estimators for time-evolution systems” section are accurate, their computation involves the quantities \(\Phi ^k(\mu )\) and \({{\bar{\Phi }}}^k(\mu )\), for which the term \(\Vert {\textbf{A}}_t(\mu )^{-1}\Vert = \dfrac{1}{\sigma _{\min }({\textbf{A}}_t(\mu ))}\) needs to be evaluated for every \(\mu \), where \(\sigma _{\min }({\textbf{A}}_t(\mu ))\) is the smallest singular value of the matrix \({\textbf{A}}_t(\mu )\). In function space, \(\sigma _{\min }({\textbf{A}}_t(\mu ))\) corresponds to the inf-sup constant of a linear operator [56]. This poses two challenges. Firstly, the complexity of computing the smallest singular value is at least linearly dependent on N for each parameter sample. When the number of parameter samples is high (typical for problems with several parameters or parameters with a wide range), this can significantly increase the offline computational cost. Secondly, for some applications the matrix \({\textbf{A}}_t(\mu )\) could be poorly conditioned, so that \(\sigma _{\min }({\textbf{A}}_t(\mu ))\) is close to zero, which could lead to a blow-up of the estimated error. While methods exist in the literature [56,57,58] to address the increased computational cost, these approaches are somewhat heuristic, and careful tuning of the involved parameters is needed to achieve good results. In the following theorem, we derive a new output error estimator for time-evolution systems that avoids the inf-sup constant.
Remark 13
For ease of exposition, we derive the new inf-sup-constant-free error estimator based on the derivation of the output error estimator in Theorem 5. However, a similar process can be repeated to derive inf-sup-constant-free versions of the error estimators presented in (34), (35) and (36). Furthermore, a straightforward extension of the inf-sup-constant-free output error estimator applies to the output error estimators in (47) and (50), which deal with the case of black-box time-integration solvers.
Theorem 11
(Primal-dual inf-sup-constant-free output error estimator) For the time-discrete FOM (8) and the time-discrete ROM (9), assume the time step \(\delta t\) is constant, so that \({\textbf{A}}_t(\mu )\) does not change with time. Let all the assumptions in Theorem 5 be met. Then the output error \(e_o^k(\mu )={\textbf{y}}^k(\mu )-{\hat{{\textbf{y}}}}^k(\mu )\) at the time instance \(t^k\) can be bounded as
where \({{\tilde{\rho }}}^k(\mu ):=\Vert {\tilde{{\textbf{r}}}}^k(\mu )\Vert / \Vert {\textbf{r}}^k(\mu )\Vert \) and \(\breve{\Phi }(\mu )=\Vert {\textbf{e}}_{du}(\mu )\Vert +\Vert {\textbf{V}}_{du}{\hat{{\textbf{x}}}}_{du}(\mu )\Vert \) with \({\textbf{e}}_{du}(\mu ):= [{\textbf{A}}_t(\mu )]^{-T} {\textbf{r}}_{du}(\mu )\). Here, \({\hat{{\textbf{x}}}}_{du}(\mu )\) and \({\textbf{r}}_{du}(\mu )\) are defined in (26) and (27), respectively.
Proof
We start with the expression of (32) from Theorem 5 and write
Since \({\textbf{A}}_t(\mu )\) does not depend on time, we can safely remove the superscript k from \({\textbf{r}}_{du}(\mu )\). Unlike what is done in (32), we do not apply the matrix submultiplicative property in the second line to the term \(\Vert [{\textbf{A}}_t(\mu )]^{-T} {\textbf{r}}_{du}(\mu ) \Vert \). The expression \([{\textbf{A}}_t(\mu )]^{-T} {\textbf{r}}_{du}(\mu ) =: {\textbf{e}}_{du}(\mu )\) is seen to be the solution of the linear system
We call the above linear system the dual-residual system corresponding to the dual system (25). Using this dual-residual system and the expression \({\tilde{{\textbf{y}}}}^k(\mu )={\hat{{\textbf{y}}}}^k(\mu )+({\textbf{V}}_{du}{\hat{{\textbf{x}}}}_{du}(\mu ))^T{\tilde{{\textbf{r}}}}^k(\mu )\), we write
where \({\hat{{\textbf{x}}}}_{du}(\mu )\) does not change with time, so that the superscript k is also removed. Defining \(\breve{\Phi }(\mu ):= \Vert {\textbf{e}}_{du}(\mu )\Vert + \Vert {\textbf{V}}_{du}{\hat{{\textbf{x}}}}_{du}(\mu )\Vert \) and using \({{\tilde{\rho }}}^k(\mu )=\Vert {\tilde{{\textbf{r}}}}^k(\mu )\Vert /\Vert {\textbf{r}}^k(\mu )\Vert \), we get the desired error bound. \(\square \)
Finally, we approximate the ratio \({\tilde{\rho }}^{k}(\mu )\) with the quantity \(\rho (\mu )\) to obtain the inf-sup-constant-free output error estimator as
Computational aspects
In (52), evaluating \(\breve{\Phi }(\mu )\) involves determining \(\Vert {\textbf{e}}_{du}(\mu ) \Vert \) by solving the dualresidual system (51) for every parameter sample \(\mu \). This step can be computationally expensive. To address this, we propose to obtain a ROM for (51) such that we can approximate \({\textbf{e}}_{du}(\mu ) \approx {\textbf{V}}_{e} {\hat{{\textbf{e}}}}_{du}(\mu )\). The ROM reads
where \({\hat{{\textbf{A}}}}_{e}(\mu )={\textbf{V}}_{e}^T {\textbf{A}}_{t}(\mu ){\textbf{V}}_{e}\) and \({\hat{{\textbf{r}}}}_{du}(\mu )={\textbf{V}}_{e}^T{\textbf{r}}_{du}(\mu )\). The dual residual \({\textbf{r}}_{du}(\mu )\) is the residual induced by the approximate solution \({\textbf{V}}_{du}{\hat{{\textbf{x}}}}_{du}(\mu )\) computed from the dual ROM (26). We propose a greedy algorithm in which \({\textbf{V}}_{e}\) and the projection matrix \({\textbf{V}}_{du}\) for the dual system ROM (26) are constructed simultaneously. For an appropriately computed \({\textbf{V}}_{e}\), we have \(\Vert {\textbf{e}}_{du}(\mu ) \Vert \approx \Vert {\textbf{V}}_{e} {\hat{{\textbf{e}}}}_{du}(\mu )\Vert \) and hence the inf-sup-constant-free error estimator (52) can be further approximated as
with \(\breve{\Phi }_e(\mu ):= \Vert {\textbf{V}}_{e} {\hat{{\textbf{e}}}}_{du}(\mu )\Vert + \Vert {\textbf{V}}_{du}{\hat{{\textbf{x}}}}_{du}(\mu )\Vert \). Next, the greedy algorithm to simultaneously construct \({\textbf{V}}_{du}\) and \({\textbf{V}}_{e}\) is detailed.
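The second-level approximation above can be sketched as follows. Random data stand in for \({\textbf{A}}_t(\mu )\), \({\textbf{r}}_{du}(\mu )\) and \({\textbf{V}}_e\), so all names and sizes are illustrative assumptions; the point is that, once the reduced quantities are assembled, the reduced dual-residual solve is independent of N:

```python
import numpy as np

def rom_dual_residual_norm(A_t, V_e, r_du):
    """Galerkin ROM of the dual-residual system A_t^T e_du = r_du:
    solve (V_e^T A_t^T V_e) e_hat = V_e^T r_du and return ||V_e e_hat||,
    the computable surrogate for ||e_du||."""
    A_hat = V_e.T @ A_t.T @ V_e      # r x r reduced operator
    r_hat = V_e.T @ r_du             # reduced right-hand side
    e_hat = np.linalg.solve(A_hat, r_hat)
    return np.linalg.norm(V_e @ e_hat)

# Illustrative data: a well-conditioned random A_t and a 2-column basis V_e.
rng = np.random.default_rng(0)
N, r = 40, 2
A_t = np.eye(N) + 0.05 * rng.standard_normal((N, N))
r_du = rng.standard_normal(N)
V_e, _ = np.linalg.qr(rng.standard_normal((N, r)))

approx = rom_dual_residual_norm(A_t, V_e, r_du)
exact = np.linalg.norm(np.linalg.solve(A_t.T, r_du))   # ||e_du||, a FOM solve
```

With a basis \({\textbf{V}}_e\) built greedily from dual-residual information, as described next, the surrogate tracks the exact norm closely; with the random basis above it is only a rough stand-in.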
Simultaneous and greedy construction of \({\textbf{V}}_{du}\) and \({\textbf{V}}_{e}\)
The greedy algorithm is sketched in Algorithm 2. The inputs to the algorithm are the system matrices corresponding to the dual system, viz., \({\textbf{A}}_{t}(\mu )^{T}, {\textbf{c}}(\mu )^{T}\), a properly chosen training set \(\Xi \) and a tolerance tol. The outputs of the algorithm are the two projection bases \({\textbf{V}}_{du}\) and \({\textbf{V}}_{e}\), which are needed to evaluate \(\breve{\Phi }_e(\mu )\) in the inf-sup-constant-free error estimator (54). In Step 1, the initial greedy parameters \(\mu ^*\) and \(\mu ^*_{e}\) are initialized, ensuring that \(\mu ^* \ne \mu ^*_{e}\). The projection matrices \({\textbf{V}}_{du}, {\textbf{V}}_{e}^0\) and \({\textbf{V}}_{e}\) are initialized as empty matrices. In Steps 3 and 5, the FOM (25) is evaluated at \(\mu ^*\) and \(\mu ^*_{e}\), respectively. The resulting dual system snapshots are then used to update \({\textbf{V}}_{du}\) and \({\textbf{V}}_e^0\) in Steps 4 and 6, respectively, using, e.g., the modified Gram–Schmidt process with deflation. Step 7 involves constructing the projection matrix \({\textbf{V}}_{e}\). Following this, the ROM in (53) is solved to evaluate \(\Vert {\textbf{V}}_{e} {\hat{{\textbf{e}}}}_{du}(\mu )\Vert \) \(\forall \mu \in \Xi \), which is then used as an error estimator to choose the next greedy parameter \(\mu ^*\) in Step 8. Furthermore, in Step 9, the norm of the residual \(\Vert {\textbf{r}}_{du}(\mu ) - [{\textbf{A}}_t(\mu )]^{T} {\textbf{V}}_{e} {\hat{{{\textbf{e}}}}}_{du}(\mu ) \Vert \) induced by the ROM (53) of the dual-residual system is evaluated to determine the second greedy parameter \(\mu ^*_{e}\) for the next iteration. In Step 10, the maximum estimated error at the current iteration is set to the maximum estimated error from Step 8, i.e., \(\epsilon = \max _{\mu \in \Xi }\Vert {\textbf{V}}_{e} {\hat{{\textbf{e}}}}_{du}(\mu ) \Vert \).
Remark 14
In Step 8, we have used the criterion \(\Vert {\textbf{V}}_{e} {\hat{{\textbf{e}}}}_{du}(\mu )\Vert \) to select the parameter \(\mu ^*\) for constructing \({\textbf{V}}_{du}\) for the ROM (26). Recalling the state error estimator (37) proposed for steady parametric systems, it is easy to see that \(\Vert {\textbf{V}}_{e} {\hat{{\textbf{e}}}}_{du}(\mu )\Vert \) is exactly the state error estimator for the state error \(\Vert {\textbf{x}}_{du}(\mu )-{\hat{{\textbf{x}}}}_{du}(\mu )\Vert \) of the ROM (26). We use this state error estimator to iteratively construct the projection matrix \({\textbf{V}}_{du}\) for the ROM (26). In order to evaluate the state error estimator, we also need to construct \({\textbf{V}}_e\). In [38], we have explained the construction of \({\textbf{V}}_e\) in detail. In particular, a different criterion is used for the greedy construction of \({\textbf{V}}_e\), namely the norm \(\Vert {\textbf{r}}_{du}(\mu ) - [{\textbf{A}}_t(\mu )]^{T} {\textbf{V}}_{e} {\hat{{{\textbf{e}}}}}_{du}(\mu ) \Vert \), in order to avoid \(\mu ^*=\mu ^*_e\). The vector \({\textbf{r}}_{du}(\mu ) - [{\textbf{A}}_t(\mu )]^{T} {\textbf{V}}_{e} {\hat{{{\textbf{e}}}}}_{du}(\mu )\) is nothing but the residual vector induced by the ROM (53) of the dual-residual system (51).
To obtain the ROM (3) corresponding to the FOM (2), we apply the adaptive POD-Greedy algorithm [31] to construct the projection matrix \({\textbf{V}}\). As the first test case (TC1), we apply the adaptive POD-Greedy algorithm using the output error estimator presented in Theorem 5. For the second test case (TC2), we apply the adaptive POD-Greedy algorithm with the new inf-sup-constant-free error estimator \({{\tilde{\Delta }}}_{\text {iscf}}(\mu )\) (54). To compute the projection bases for evaluating (54), we make use of Algorithm 2. We first run Algorithm 2 to obtain the projection matrices \({\textbf{V}}_{du}, {\textbf{V}}_e\), as well as the reduced quantities \({\hat{{\textbf{x}}}}_{du}(\mu )\), \({\hat{{\textbf{e}}}}_{du}(\mu )\) corresponding to \({\textbf{V}}_{du}, {\textbf{V}}_e\). During each iteration of the POD-Greedy algorithm, those quantities are then used to compute the output error estimator \({{\tilde{\Delta }}}_{\text {iscf}}(\mu )\) (54).
Numerical examples
Next, we illustrate the benefits of using the inf-sup-constant-free error estimator \({\tilde{\Delta }}_{\text {iscf}}(\mu )\) in (54) with three numerical examples: the 1D and 2D Burgers’ equations and the FitzHugh–Nagumo equations. It is demonstrated that, firstly, the inf-sup-constant-free error estimator remains accurate when used in the POD-Greedy algorithm to construct \({\textbf{V}}\). Secondly, the new approach yields a significant reduction of the offline computational costs by avoiding the solution of several large-scale eigenvalue problems otherwise needed to obtain the inf-sup constant.
For the adaptive POD-Greedy algorithm, we plot the maximum estimated error computed using the respective error estimators for TC1 and TC2 over the training set \(\Xi \) at every iteration. We define this as:
where \(\Delta (\mu )\) is either (33) in case of TC1 or (54) in case of TC2.
1D Burgers’ equation
The viscous Burgers’ equation defined in the 1D domain \(\Omega := [0, 1]\) is given by
with \(v:= v(z, t) \in {\mathbb {R}}\) denoting the state variable, \(z \in \Omega \) the spatial variable and \(t \in [0, 2]\) the time variable. We spatially discretize (55) with the finite difference method. The mesh size is \(\Delta z = 0.001\), which results in a discretized FOM of dimension \(N = 4000\). As the variable parameter, we consider the viscosity \(\mu \in {\mathcal {P}}:= [0.005, 1]\). The output variable of interest is the value of the state at the node just before the right boundary. The ROM tolerance is set to tol = \(1\times 10^{-4}\). We generate 100 sample points in \({\mathcal {P}}\) using np.logspace in Python, out of which 80 randomly chosen samples constitute the training set.
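The discretization described above can be sketched as follows. This is a stand-alone illustration with our own, smaller grid, time step and initial condition, not the authors' exact setup: diffusion is treated implicitly, so every time step solves a system with the matrix \({\textbf{A}}_t(\mu )\) of the preceding sections, while convection is handled explicitly:

```python
import numpy as np

# Hedged sketch: first-order IMEX finite differences for the 1D viscous
# Burgers equation v_t + v v_z = mu * v_zz on (0, 1), with homogeneous
# Dirichlet boundary conditions (grid, dt and IC are illustrative choices).
N = 200
z = np.linspace(0.0, 1.0, N + 2)[1:-1]            # interior nodes
dz, dt = z[1] - z[0], 1e-3
D1 = (np.diag(np.ones(N - 1), 1) - np.diag(np.ones(N - 1), -1)) / (2 * dz)
D2 = (np.diag(np.ones(N - 1), 1) - 2 * np.eye(N)
      + np.diag(np.ones(N - 1), -1)) / dz**2

def imex_step(v, mu):
    """(I - dt*mu*D2) v^{k+1} = v^k - dt * v^k (D1 v^k): the left-hand-side
    matrix is exactly A_t(mu) = E - dt*mu*D2 with E = I."""
    A_t = np.eye(N) - dt * mu * D2
    return np.linalg.solve(A_t, v - dt * v * (D1 @ v))

v = np.sin(np.pi * z)                              # illustrative initial state
for _ in range(100):                               # advance to t = 0.1
    v = imex_step(v, mu=0.05)
output = v[-1]              # state at the node just before the right boundary
```

In a FOM/ROM setting, the same step is repeated with \({\textbf{V}}^T{\textbf{A}}_t(\mu ){\textbf{V}}\) in place of \({\textbf{A}}_t(\mu )\).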
For TC1, we first use the standard greedy Algorithm 1 to compute the projection basis \({\textbf{V}}_{du}\) for the ROM of the dual system. The error estimator used in Algorithm 1 is the state error bound (17), so that the inf-sup constants \(\Vert [{\textbf{A}}_t(\mu )]^{-1}\Vert \) \(\forall \mu \in \Xi \) are precomputed before starting the greedy iteration. These are then used to evaluate the output error estimator (33) during the POD-Greedy algorithm for constructing \({\textbf{V}}\). The greedy Algorithm 1 for constructing \({\textbf{V}}_{du}\) converges in 1.1 s. However, computing the inf-sup constants took 164.8 s. For solving the eigenvalue problem at every parameter, we make use of the SciPy library for Python. The POD-Greedy algorithm needs 255.7 s to converge, with the final ROM dimension \(n = 4\).
In the case of TC2, Algorithm 2 is first used to compute the projection bases \({\textbf{V}}_{du}\) and \({\textbf{V}}_{e}\) simultaneously. This requires a total time of 0.98 s. The POD-Greedy algorithm using the inf-sup-constant-free error estimator takes the same 255.7 s to converge. The final ROM dimension is also \(n = 4\). Convergence of the POD-Greedy algorithm for ROM generation in the cases of TC1 and TC2 is plotted in Fig. 2. It is clear that using the inf-sup-constant-free error estimator results in little loss of accuracy in the error estimation, while speeding up the offline basis generation by 1.6\(\times \).
2D Burgers’ equation
We next consider the 2D coupled Burgers’ equation in the square domain \(\Omega := [0, 2] \times [0, 2]\). The governing equations are as follows:
We impose the following Dirichlet boundary conditions:
The initial conditions at time \(t = 0\) are given by
with \(\phi _{1} = \phi _{2} = 10\, e^{-(z_{1} - 0.8)^{2} - (z_{2} - 1.0)^{2}}\). In (56), \(v_{1}(z_{1}, z_{2}, t)\) and \(v_{2}(z_{1}, z_{2}, t)\) denote the state variables and represent, respectively, the velocity components in the canonical x and y directions. Further, \((z_{1}, z_{2}) \in \Omega \) and \(t \in [0, 1]\). Similar to the 1D case, we spatially discretize the 2D Burgers’ equation using the finite difference method with a step size \(\Delta z_{1} = \Delta z_{2} = 0.011\) (90 divisions along both the x-axis and the y-axis). This results in a coupled FOM of dimension \(N = 2\cdot 8100\). The viscosity \(\mu \in {\mathcal {P}}:= [0.01, 1]\) is the parameter of interest. As the output, we take the mean of the x-component velocities in the region \({\widetilde{\Omega }}:= [0.7, 1.4] \times [0.7, 1.4]\). A first-order implicit–explicit scheme with time step size \(\Delta t = 0.0025\) is used. The ROM tolerance is tol = \(1\times 10^{-3}\). We generate 60 log-spaced sample points (with np.logspace in Python) from \({\mathcal {P}}\), out of which 48 randomly chosen samples constitute the training set.
For TC1, the computation of the dual system projection matrix \({\textbf{V}}_{du}\) takes 36.5 s and computing the inf-sup constant by solving an eigenvalue problem for every parameter in the training set \(\Xi \) took 3380 s. Following this, Algorithm 1 is used to obtain the projection matrix \({\textbf{V}}\). It requires 5808 s to reach the desired tolerance of \(1\times 10^{-3}\) in 11 iterations. The ROM dimension is \(n = 44\).
The simultaneous generation of \({\textbf{V}}_{du}\) and \({\textbf{V}}_{e}\) with Algorithm 2 needs 63 s in the case of TC2. The POD-Greedy algorithm using the inf-sup-constant-free error estimator takes 5811 s, which is close to the time taken by the greedy algorithm in the case of TC1. The resulting ROM has the same dimension as before, viz., \(n = 44\). The convergence of the estimated and true errors for TC1 and TC2 is plotted in Fig. 3. Evidently, the use of the inf-sup-constant-free output error estimator results in no loss of accuracy of the estimated error. The overall speedup achieved in the case of TC2 is 1.6-fold, compared to the offline time for TC1. Since the system is of much larger dimension than in the 1D case, computing the inf-sup constants takes much longer: 3380 s. Using the inf-sup-constant-free error estimator has thus saved almost one hour of offline computational time.
FitzHugh–Nagumo equations
The FitzHugh–Nagumo system models the response of an excitable neuron or cell under an external stimulus. It finds applications in a variety of fields such as cardiac electrophysiology and brain modeling. The nonlinear coupled system of two partial differential equations defined in the domain \(\Omega := [0, L]\) is given below:
with boundary conditions
and initial conditions
In the above equations, \(v_{1}(z, t)\) and \(v_{2}(z, t)\) represent the electric potential and the recovery rate of the potential, respectively. The spatial variable is denoted by \(z \in \Omega \) and the time \(t \in [0, 5]\). The nonlinear term is \(f(v_{1}(z, t)):= v_{1} (v_{1} - 0.1) (1 - v_{1})\). The external stimulus is \(I_{\text {ext}}(t) = 50000\, t^{3} e^{-15t} \). The system has four free parameters \(\epsilon , c, b, \gamma \). We fix \(b=0.5\) and \(\gamma =2\), while the two free parameters are \(\mu = [\epsilon , c] \in {\mathcal {P}}:= [0.01, 0.04] \times [0.025, 0.075]\). A finite difference scheme is employed to spatially discretize (57), with 2048 nodes used for each variable, leading to a FOM of dimension \(N = 4096\). We sample 100 parameters uniformly from the domain \({\mathcal {P}}\) and randomly choose 70 of them to form the training set \(\Xi \). The temporal discretization is done on a uniform grid with \(\delta t = 0.01\). The output variables of interest are the values of the two state variables at the node next to the leftmost boundary. The greedy algorithm tolerance is set to \(\text {tol} = 1\times 10^{-3}\).
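For concreteness, the reaction terms used in (57) can be written out directly; this is a small self-contained sketch where only \(f\) and \(I_{\text {ext}}\) are taken from the text, and the evaluation times are our own:

```python
import numpy as np

def f(v1):
    """Cubic FitzHugh-Nagumo nonlinearity f(v1) = v1 (v1 - 0.1) (1 - v1);
    it vanishes at the rest state 0, the threshold 0.1 and the excited state 1."""
    return v1 * (v1 - 0.1) * (1.0 - v1)

def I_ext(t):
    """External stimulus I_ext(t) = 50000 t^3 exp(-15 t): a smooth pulse that
    starts at zero, peaks at t = 3/15 = 0.2 and decays afterwards."""
    return 50000.0 * t**3 * np.exp(-15.0 * t)

roots = [f(0.0), f(0.1), f(1.0)]          # the three zeros of the cubic
pulse = [I_ext(t) for t in (0.1, 0.2, 0.3)]
```

The cubic nonlinearity is what makes hyperreduction (EIM/DEIM) necessary for an efficient ROM of this system.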
As done for the previous example, we first consider TC1 for the FitzHugh–Nagumo system. In this example, the greedy algorithm needs 1.6 s to obtain \({\textbf{V}}_{du}\), while computing the inf-sup constants takes 174.7 s. The POD-Greedy algorithm based on the error estimator in Theorem 5 converges to the desired tolerance in 8 iterations, taking 291.2 s. The resulting ROM is of dimension \(n = 48\).
Applying TC2 to this example, Algorithm 2 requires just 3.6 s to obtain \({\textbf{V}}_{du}\) and \({\textbf{V}}_{e}\). The POD-Greedy algorithm converges in 8 iterations and the runtime is 292 s. The ROM dimension is again \(n = 48\). The convergence of the POD-Greedy algorithm for ROM generation in the cases of TC1 and TC2 is plotted in Fig. 4. Likewise, using the inf-sup-constant-free error estimator results in no loss of accuracy, while achieving a 1.6\(\times \) speedup. For these examples, the inf-sup constants, i.e., the smallest singular values of \({\textbf{A}}_t(\mu )\) at all the training samples of \(\mu \), are close to 1, so that the effectivity of the inf-sup-constant-free error estimator in TC2 differs negligibly from that in TC1. This can be seen from Figs. 2, 3 and 4.
Conclusion
A posteriori error estimation is vital not only to quantify the accuracy of ROMs but also to construct ROMs of small dimension in a computationally efficient and adaptive manner. In this review, we have presented a wide range of a posteriori error estimators applicable to (non)linear parametric systems, covering both steady and time-dependent systems. Furthermore, we have discussed multi-fidelity error estimators as a means to improve the computational efficiency of error estimation. As a novel contribution, we have introduced an inf-sup-constant-free output error estimator that is applicable to nonlinear time-dependent systems. This new error estimator is attractive for its improved efficiency and for its applicability to systems with a potentially ill-conditioned left-hand-side matrix, e.g., \({\textbf{A}}_t(\mu )\) with smallest singular value close to zero. Results for three numerical examples illustrate the reduced computational costs offered by the inf-sup-constant-free output error estimator, achieved with little loss of accuracy. Going ahead, we envisage an important role for accurate error estimation in applications such as digital twins, where model updates can be performed on the fly based on the accuracy quantified by error estimators.
Availability of data and materials
Not applicable.
References
Benner P, Gugercin S, Willcox K. A survey of projection-based model reduction methods for parametric dynamical systems. SIAM Rev. 2015;57(4):483–531. https://doi.org/10.1137/130932715.
Daniel L, Siong OC, Chay LS, Lee KH, White J. A multiparameter moment-matching model-reduction approach for generating geometrically parameterized interconnect performance models. IEEE Trans Comput-Aided Design Integr Circuits Syst. 2004;23(5):678–93.
Feng L, Benner P. Chap. 6: A robust algorithm for parametric model order reduction based on implicit moment matching. In: Quarteroni A, Rozza G, editors. Reduced order methods for modeling and computational reduction, vol. 9. MS&A Series. Berlin: Springer; 2014. p. 159–86. https://doi.org/10.1007/978-3-319-02090-7_6.
Baur U, Beattie CA, Benner P, Gugercin S. Interpolatory projection methods for parameterized model reduction. SIAM J Sci Comput. 2011;33(5):2489–518. https://doi.org/10.1137/090776925.
Amsallem D. Interpolation on manifolds of CFD-based fluid and finite element-based structural reduced-order models for online aeroelastic predictions. Ph.D. Thesis, Stanford University; 2010.
Geuß M, Butnaru D, Peherstorfer B, Bungartz H, Lohmann B. Parametric model order reduction by sparse-grid-based interpolation on matrix manifolds for multidimensional parameter spaces. In: Proceedings of the European control conference, Strasbourg, France; 2014. p. 2727–32. https://doi.org/10.1109/ECC.2014.6862414.
Geuß M, Lohmann B. STABLE—a stability algorithm for parametric model reduction by matrix interpolation. Math Comput Model Dyn Syst. 2016;22(4):307–22. https://doi.org/10.1080/13873954.2016.1198383.
Kabir M, Khazaka R. Parametric macromodeling of high-speed modules from frequency-domain data using Loewner matrix based method. In: IEEE MTT-S international microwave symposium digest; 2013. p. 1–4.
Ionita AC, Antoulas AC. Data-driven parametrized model reduction in the Loewner framework. SIAM J Sci Comput. 2014;36(3):984–1007. https://doi.org/10.1137/130914619.
Xiao YQ, GrivetTalocia S, Manfredi P, Khazaka R. A novel framework for parametric Loewner matrix interpolation. IEEE Trans Compon Packag Manuf Technol. 2019;9(12):2404–17. https://doi.org/10.1109/TCPMT.2019.2948802.
Yue Y, Feng L, Benner P. Reduced-order modelling of parametric systems via interpolation of heterogeneous surrogates. Adv Model Simul Eng Sci. 2019;6:10. https://doi.org/10.1186/s40323-019-0134-y.
Bui-Thanh T, Willcox K, Ghattas O. Model reduction for large-scale systems with high-dimensional parametric input space. SIAM J Sci Comput. 2008;30:3270–88.
Tezaur R, As’ad F, Farhat C. Robust and globally efficient reduction of parametric, highly nonlinear computational models and real time online performance. Comput Methods Appl Mech Eng. 2022;399:115392.
He W, Avery P, Farhat C. In situ adaptive reduction of nonlinear multiscale structural dynamics models. Int J Numer Methods Eng. 2020;121(22):4971–88.
Rozza G, Huynh DBP, Patera AT. Reduced basis approximation and a posteriori error estimation for affinely parametrized elliptic coercive partial differential equations: application to transport and continuum mechanics. Arch Comput Methods Eng. 2008;15(3):229–75. https://doi.org/10.1007/s11831-008-9019-9.
Haasdonk B, Ohlberger M. Reduced basis method for finite volume approximations of parametrized linear evolution equations. ESAIM Math Model Numer Anal. 2008;42(2):277–302. https://doi.org/10.1051/m2an:2008001.
Quarteroni A, Manzoni A, Negri F. Reduced basis methods for partial differential equations, vol. 92. La Matematica per il 3+2. Cham: Springer; 2016. https://doi.org/10.1007/978-3-319-15431-2.
Hesthaven JS, Rozza G, Stamm B. Certified reduced basis methods for parametrized partial differential equations. Springer briefs in mathematics. Cham: Springer; 2016. https://doi.org/10.1007/978-3-319-22470-1.
Grepl M. Reduced-basis approximation and a posteriori error estimation for parabolic partial differential equations. PhD thesis, Cambridge: Massachusetts Institute of Technology (MIT); 2005. https://dspace.mit.edu/handle/1721.1/32387.
Veroy K. Reduced-basis methods applied to problems in elasticity: analysis and applications. PhD thesis, Cambridge: Massachusetts Institute of Technology (MIT); 2003. https://dspace.mit.edu/handle/1721.1/29583.
Rovas DV. Reduced-basis output bound methods for parametrized partial differential equations. PhD thesis, Cambridge: Massachusetts Institute of Technology (MIT); 2003. https://dspace.mit.edu/handle/1721.1/16956.
Sen S. Reduced basis approximation and a posteriori error estimation for non-coercive elliptic problems: applications to acoustics. PhD thesis, Cambridge: Massachusetts Institute of Technology (MIT); 2007. http://dspace.mit.edu/handle/1721.1/7582.
Veroy K, Prud’Homme C, Rovas DV, Patera AT. A posteriori error bounds for reduced-basis approximation of parametrized noncoercive and nonlinear elliptic partial differential equations. In: 16th AIAA computational fluid dynamics conference, Orlando, United States; 2003. https://hal.archives-ouvertes.fr/hal-01219051.
Haasdonk B, Ohlberger M. Efficient reduced models and a posteriori error estimation for parametrized dynamical systems by offline/online decomposition. Math Comput Model Dyn Syst. 2011;17(2):145–61. https://doi.org/10.1080/13873954.2010.514703.
Machiels L, Maday Y, Patera AT. Output bounds for reduced-order approximations of elliptic partial differential equations. Comput Meth Appl Mech Eng. 2001;190(26–27):3413–26. https://doi.org/10.1016/S0045-7825(00)00275-9.
Rovas DV, Machiels L, Maday Y. Reducedbasis output bound methods for parabolic problems. IMA J Numer Anal. 2006;26(3):423–45.
Grepl MA, Patera AT. A posteriori error bounds for reduced-basis approximations of parametrized parabolic partial differential equations. ESAIM Math Model Numer Anal. 2005;39(1):157–81. https://doi.org/10.1051/m2an:2005006.
Quarteroni A, Rozza G, Manzoni A. Certified reduced basis approximation for parametrized partial differential equations and applications. J Math Ind. 2011;1:3–44. https://doi.org/10.1186/2190-5983-1-3.
Zhang Y. Model order reduction for parameterized nonlinear evolution equations. Dissertation, Magdeburg: Otto-von-Guericke-Universität; 2016. https://doi.org/10.25673/4434.
Zhang Y, Feng L, Li S, Benner P. An efficient output error estimation for model order reduction of parametrized evolution equations. SIAM J Sci Comput. 2015;37(6):910–36. https://doi.org/10.1137/140998603.
Chellappa S, Feng L, Benner P. Adaptive basis construction and improved error estimation for parametric nonlinear dynamical systems. Int J Numer Methods Eng. 2020;121(23):5320–49. https://doi.org/10.1002/nme.6462.
Feng L, Antoulas AC, Benner P. Some a posteriori error bounds for reduced order modelling of (non)parametrized linear systems. ESAIM M2AN. 2017;51(6):2127–58. https://doi.org/10.1051/m2an/2017014.
Feng L, Benner P. A new error estimator for reduced-order modeling of linear parametric systems. IEEE Trans Microw Theory Tech. 2019;67(12):4848–59. https://doi.org/10.1109/TMTT.2019.2948858.
Feng L, Benner P. On error estimation for reducedorder modeling of linear nonparametric and parametric systems. ESAIM Math Model Numer Anal. 2021;55(2):561–94. https://doi.org/10.1051/m2an/2021001.
Chellappa S, Feng L, Benner P. Accurate error estimation for model reduction of nonlinear dynamical systems via dataenhanced error closure. Comput Methods Appl Mech Eng. 2024;420:116712. https://doi.org/10.1016/j.cma.2023.116712.
Feng L, Lombardi L, Antonini G, Benner P. Multifidelity error estimation accelerates greedy model reduction of complex dynamical systems. Int J Numer Methods Eng. 2023;124(23):5312–33. https://doi.org/10.1002/nme.7348.
Chamoin L, Legoll F. An introductory review on a posteriori error estimation in finite element computations. SIAM Rev. 2023;65(4):963–1028.
Chellappa S, Feng L, Rubia V, Benner P. Inf-sup-constant-free state error estimator for model order reduction of parametric systems in electromagnetics. IEEE Trans Microw Theory Tech. 2023;71(11):4762–77. https://doi.org/10.1109/TMTT.2023.3288642.
Bonin T, Faßbender H, Soppa A, Zaeh M. A fully adaptive rational global Arnoldi method for the model-order reduction of second-order MIMO systems with proportional damping. Math Comput Simul. 2016;122:1–19. https://doi.org/10.1016/j.matcom.2015.08.017.
Grunert D, Fehr J, Haasdonk B. Well-scaled, a-posteriori error estimation for model order reduction of large second-order mechanical systems. ZAMM J Appl Math Mech / Zeitschrift für Angewandte Mathematik und Mechanik. 2019;100(8):201900186. https://doi.org/10.1002/zamm.201900186.
Chinesta F, Ammar A, Cueto E. Recent advances and new challenges in the use of the proper generalized decomposition for solving multidimensional models. Arch Comput Methods Eng. 2010;17(4):327–50.
Ladevèze P, Chamoin L. On the verification of model reduction methods based on the proper generalized decomposition. Comput Methods Appl Mech Eng. 2011;200(23–24):2032–47.
Alfaro I, González D, Zlotnik S, Díez P, Cueto E, Chinesta F. An error estimator for real-time simulators based on model order reduction. Adv Model Simul Eng Sci. 2015. https://doi.org/10.1186/s40323-015-0050-8.
Chamoin L, Pled F, Allier PE, Ladevèze P. A posteriori error estimation and adaptive strategy for PGD model reduction applied to parametrized linear parabolic problems. Comput Methods Appl Mech Eng. 2017;327:118–46.
Gubisch M, Neitzel I, Volkwein S. A-posteriori error estimation of discrete POD models for PDE-constrained optimal control. In: Benner P, Ohlberger M, Patera A, Rozza G, Urban K, editors. Model reduction of parametrized systems. Cham: Springer; 2017. p. 213–34.
Dogančić B, Jokić M. Discretization and model reduction error estimation of interconnected dynamical systems. IFAC-PapersOnLine. 2022;55(4):177–82. https://doi.org/10.1016/j.ifacol.2022.06.029.
Binder A, Jadhav O, Mehrmann V. Error analysis of a model order reduction framework for financial risk analysis. Electron Trans Numer Anal. 2022;55:469–507.
Ascher UM, Ruuth SJ, Wetton BTR. Implicit–explicit methods for time-dependent partial differential equations. SIAM J Numer Anal. 1995;32(3):797–823. https://doi.org/10.1137/0732037.
Barrault M, Maday Y, Nguyen NC, Patera AT. An ‘empirical interpolation’ method: application to efficient reduced-basis discretization of partial differential equations. CR Acad Sci Paris. 2004;339(9):667–72. https://doi.org/10.1016/j.crma.2004.08.006.
Chaturantabut S, Sorensen DC. Nonlinear model reduction via discrete empirical interpolation. SIAM J Sci Comput. 2010;32(5):2737–64. https://doi.org/10.1137/090766498.
Drohmann M, Haasdonk B, Ohlberger M. Reduced basis approximation for nonlinear parametrized evolution equations based on empirical operator interpolation. SIAM J Sci Comput. 2012;34(2):937–69. https://doi.org/10.1137/10081157X.
Zhang Y, Feng L, Li S, Benner P. Accelerating PDE constrained optimization by the reduced basis method: application to batch chromatography. Int J Numer Methods Eng. 2015;104(11):983–1007. https://doi.org/10.1002/nme.4950.
Schmidt A, Wittwar D, Haasdonk B. Rigorous and effective a-posteriori error bounds for nonlinear problems—application to RB methods. Adv Comput Math. 2020;46(2):1–30. https://doi.org/10.1007/s10444-020-09741-x.
Smetana K, Zahm O, Patera AT. Randomized residual-based error estimators for parametrized equations. SIAM J Sci Comput. 2019;41(2):900–26. https://doi.org/10.1137/18M120364X.
Chellappa S, Feng L, Benner P. An adaptive sampling approach for the reduced basis method. In: Realization and model reduction of dynamical systems—a Festschrift in honor of the 70th birthday of Thanos Antoulas. Cham: Springer; 2022. p. 137–155. https://doi.org/10.1007/978-3-030-95157-3_8.
Huynh DBP, Rozza G, Sen S, Patera AT. A successive constraint linear optimization method for lower bounds of parametric coercivity and inf-sup lower bounds. CR Acad Sci Paris. 2007;345(8):473–8. https://doi.org/10.1016/j.crma.2007.09.019.
Manzoni A, Negri F. Heuristic strategies for the approximation of stability factors in quadratically nonlinear parametrized PDEs. Adv Comput Math. 2015;41(5):1255–88. https://doi.org/10.1007/s10444-015-9413-4.
Patera AT, Rozza G. Reduced basis approximation and a posteriori error estimation for parametrized partial differential equations. Technical report; 2007. MIT Pappalardo graduate monographs in mechanical engineering
Acknowledgements
Not applicable.
Funding
Open Access funding enabled and organized by Projekt DEAL.
Author information
Authors and Affiliations
Contributions
LF has written the paper up to the “Multi-fidelity error estimation” section and modified the “Inf-sup-constant-free error estimator for time-evolution systems” and “Conclusion” sections; SC has written the “Inf-sup-constant-free error estimator for time-evolution systems” and “Conclusion” sections and generated the numerical results; PB has given detailed comments throughout the paper, including important comments on the theorems.
Corresponding author
Ethics declarations
Ethics approval and consent to participate
Not applicable.
Consent for publication
Not applicable.
Competing interests
The authors declare no competing interests.
Code availability
The code will be made open access once the work is accepted for publication.
Additional information
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Appendix: Proof of Theorem 2
Proof
To simplify the proof, we first define an auxiliary dual solution \({\textbf{x}}_{du, L}^k(\mu ):={\textbf{x}}_{du}^{K-L+k-1}(\mu ), \forall L < K\), which is a time-shift of \({\textbf{x}}_{du}^k(\mu )\). It is then easy to see that \({\textbf{x}}_{du, L}^k(\mu )\) satisfies the following dual system:
The final condition of \({\textbf{x}}_{du, L}^k(\mu )\) is at time \(t_{L+1}\), and coincides with the final condition of \({\textbf{x}}_{du}^{K}(\mu )\) at time \(t_{K}\).
Multiplying both sides of the first equation in (58) from the left by \([{\textbf{e}}^{j}(\mu )]^T\), where \({\textbf{e}}^{j}(\mu ):={\textbf{x}}^j(\mu )-{\tilde{{\textbf{x}}}}^j(\mu )\), and recalling \({\textbf{A}}_t(\mu )={\textbf{E}}(\mu )-\delta t {\textbf{A}}(\mu )\), we obtain
Then we have
Summing the above equation from \(j=0\) to \(j=L\), we get
Reformulating the first sum and using the assumption \({\textbf{e}}^0(\mu )=0\) lead to
From the second equation in (58), we see that \({\textbf{E}}(\mu )^T{\textbf{x}}_{du, L}^{L+1}(\mu )\) is actually the final condition at time \(t_{L+1}\). Therefore, \({\textbf{E}}(\mu )^T{\textbf{x}}_{du, L}^{L+1}(\mu )={\textbf{c}}(\mu )^T\). Replacing \({\textbf{E}}(\mu )^T{\textbf{x}}_{du, L}^{L+1}(\mu )\) in the above equation with \({\textbf{c}}(\mu )^T\), we get
Now from the primal system (8) and its ROM (9), we can get the error system,
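The display equation of the error system is missing here; a plausible reconstruction, consistent with the residual \({\textbf{r}}^{j+1}(\mu )\) defined just below (assuming the implicit-Euler form \({\textbf{A}}_t(\mu ){\textbf{x}}^{j+1}(\mu )={\textbf{E}}(\mu ){\textbf{x}}^{j}(\mu )+\delta t\,{\textbf{b}}(\mu )u^{j+1}\) of the primal system (8)), is:

```latex
% Hedged reconstruction of the primal error system; subtracting the ROM
% recursion from the full-order recursion yields
{\textbf{A}}_t(\mu)\,{\textbf{e}}^{j+1}(\mu)
  = {\textbf{E}}(\mu)\,{\textbf{e}}^{j}(\mu) + {\textbf{r}}^{j+1}(\mu),
  \qquad j = 0,\dots,K-1.
```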
where we have used \({\textbf{A}}_t(\mu )={\textbf{E}}(\mu ) - \delta t\, {\textbf{A}}(\mu )\) and \({\textbf{r}}^{j+1}(\mu )={\textbf{E}}(\mu ){\tilde{{\textbf{x}}}}^{j}(\mu )+\delta t\, {\textbf{b}}(\mu )u^{j+1}-{\textbf{A}}_t(\mu ){\tilde{{\textbf{x}}}}^{j+1}(\mu )\) for linear systems. Multiplying \([{\textbf{x}}_{du,L}^{j+1}(\mu )]^T\) from the left on both sides of the above equation, we get
Summing up both sides of the above equation from \(j=0\) to \(j=L-1\), we obtain
Comparing (59) with (60), we derive
Recall \({\textbf{x}}_{du,L}^{j}(\mu )={\textbf{x}}_{du}^{K-L+j-1}(\mu )\), then
Taking \(L=k\) results in
From the definition of the corrected output \({\hat{{\textbf{y}}}}_c^k(\mu )\), we get
Combining (61) with (62) leads to
where \({\textbf{e}}_{du}^{K-k+j-1}(\mu )={\textbf{x}}_{du}^{K-k+j-1}(\mu )-{\tilde{{\textbf{x}}}}_{du}^{K-k+j-1}(\mu )\) is the dual error induced by the ROM (16) of the dual system. Employing the Cauchy-Schwarz inequality results in
Now we look for an upper bound of \(\left( \sum \limits _{j=1}^{k}\Vert {\textbf{e}}_{du}^{K-k+j-1}(\mu )\Vert ^2 \right) ^{1/2}\), that is, an upper bound of \(\left( \sum \limits _{j=0}^{k-1}\Vert {\textbf{e}}_{du}^{K-k+j}(\mu )\Vert ^2 \right) ^{1/2}\). From the dual system (15) and its ROM (16), we get the dual-error system,
where \({\textbf{r}}^j_{du}(\mu ):={\textbf{E}}(\mu )^T{\tilde{{\textbf{x}}}}_{du}^{j+1}(\mu )-{\textbf{A}}_t(\mu )^T \tilde{\textbf{x}}_{du}^{j}(\mu )\), and \({\tilde{{\textbf{x}}}}_{du}^{j}(\mu )={\textbf{V}}_{du}{\hat{{\textbf{x}}}}^j_{du}(\mu )\). Multiplying \([{\textbf{e}}_{du}^{j}(\mu )]^T\) from the left on both sides of the above error equation gives
Here, since \({\textbf{E}}(\mu )\) is symmetric positive definite, \({\textbf{v}}^T{\textbf{E}}(\mu )^T{\textbf{w}}\) defines an inner product \(({\textbf{v}}, {\textbf{w}})_{\textbf{E}}, \forall {\textbf{v}}, {\textbf{w}}\in {\mathbb {R}}^N\), so that the first inequality is derived from the Cauchy-Schwarz inequality. The second inequality has used \(2ab\le a^2+b^2\) \((a, b \in {\mathbb {R}})\). We further apply the inequality \(2ab\le \frac{1}{\gamma ^2} a^2+\gamma ^2 b^2\) \((\gamma \in {\mathbb {R}}_+)\) to the third term \(\Vert {\textbf{e}}_{du}^{j}(\mu )\Vert \Vert {\textbf{r}}_{du}^j(\mu )\Vert \) on the right-hand side of the above inequality with \(\gamma =(\delta t \lambda _{\min }^{\textbf{A}}(\mu ))^{1/2}\) to obtain
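The weighted Young inequality invoked here follows from a one-line completion of the square:

```latex
0 \le \left(\frac{a}{\gamma}-\gamma b\right)^2
    = \frac{a^2}{\gamma^2} - 2ab + \gamma^2 b^2
\quad\Longrightarrow\quad
2ab \le \frac{1}{\gamma^2}\,a^2 + \gamma^2\, b^2,
\qquad a, b \in {\mathbb{R}},\ \gamma \in {\mathbb{R}}_+ .
```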
where \(\lambda _{\min }^{\textbf{A}}(\mu )>0\) is the smallest eigenvalue of \({\textbf{A}}(\mu )\). Since \({\textbf{A}}(\mu )\) is symmetric positive definite, we have that \([{\textbf{e}}_{du}^{j}(\mu )]^T{\textbf{A}}(\mu )^T {\textbf{e}}_{du}^{j}(\mu ) \ge \lambda _{\min }^{\textbf{A}}(\mu ) \Vert {\textbf{e}}_{du}^{j}(\mu )\Vert ^2\), then the above inequality gives
Summing up both sides of the above inequality from \(j=K-k\) to \(K-1\) results in
The above inequality is further reformulated into
With the assumption that \({\textbf{E}}(\mu )\) is symmetric positive definite, we can safely remove \([{\textbf{e}}_{du}^{K-k}(\mu )]^T{\textbf{E}}(\mu )^T {\textbf{e}}_{du}^{K-k}(\mu ) \ge 0\) from the above inequality and obtain
Equivalently,
From the final condition of the dual system (15) and its ROM (16), we obtain
where \({\textbf{r}}_{du}^{K}(\mu )={\textbf{c}}(\mu )^T-{\textbf{E}}(\mu )^T{\tilde{{\textbf{x}}}}_{du}^K(\mu )\), so that
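The display equation following "so that" is missing; since the final conditions of (15) and (16) imply \({\textbf{E}}(\mu )^T{\textbf{e}}_{du}^{K}(\mu )={\textbf{r}}_{du}^{K}(\mu )\), a plausible reconstruction of the omitted step is:

```latex
% Hedged reconstruction: the final dual error solves
% E(mu)^T e_du^K(mu) = r_du^K(mu), hence
{\textbf{e}}_{du}^{K}(\mu) = {\textbf{E}}(\mu)^{-T}\,{\textbf{r}}_{du}^{K}(\mu),
\qquad
\Vert {\textbf{e}}_{du}^{K}(\mu)\Vert
  \le \Vert {\textbf{E}}(\mu)^{-T}\Vert\,\Vert {\textbf{r}}_{du}^{K}(\mu)\Vert .
```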
Then
Inserting this inequality into (64), and noticing that \({{\bar{\sigma }}}(\mu ):=\sigma _{\max }({\textbf{E}}(\mu ))/\sigma _{\min }({\textbf{E}}(\mu ))=\Vert {\textbf{E}}(\mu )^T\Vert \Vert {\textbf{E}}(\mu )^{-T}\Vert \), we get
where \(\sigma _{\max }({\textbf{E}}(\mu ))\) and \(\sigma _{\min }({\textbf{E}}(\mu ))\) are the largest and smallest singular values of \({\textbf{E}}(\mu )\), respectively. Combining (65) with (63) results in
where \(\Gamma (\mu ):=\frac{1}{\delta t \lambda _{\min }^{\textbf{A}}(\mu )}\) and \(\delta _{du}^K(\mu ):=\frac{{{\bar{\sigma }}}(\mu )}{\Gamma (\mu )} \Vert {\textbf{r}}_{du}^{K}(\mu )\Vert \). \(\square \)
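As a quick numerical sanity check (not part of the original proof), the identity \({{\bar{\sigma }}}(\mu )=\sigma _{\max }({\textbf{E}}(\mu ))/\sigma _{\min }({\textbf{E}}(\mu ))=\Vert {\textbf{E}}(\mu )^T\Vert \,\Vert {\textbf{E}}(\mu )^{-T}\Vert \) can be verified for a random symmetric positive definite matrix; the matrix below is a hypothetical stand-in for \({\textbf{E}}(\mu )\):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical SPD stand-in for E(mu): B B^T plus a shift keeps it
# symmetric positive definite.
B = rng.standard_normal((5, 5))
E = B @ B.T + 5.0 * np.eye(5)

# bar{sigma}(mu) = sigma_max(E) / sigma_min(E)
sigma = np.linalg.svd(E, compute_uv=False)
sigma_bar = sigma.max() / sigma.min()

# ||E^T||_2 * ||E^{-T}||_2: the matrix 2-norm equals the largest
# singular value, and ||E^{-1}||_2 = 1 / sigma_min(E).
norm_product = np.linalg.norm(E.T, 2) * np.linalg.norm(np.linalg.inv(E).T, 2)

assert np.isclose(sigma_bar, norm_product)
```

The check relies only on the facts that the spectral norm of a matrix is its largest singular value and that the singular values of \({\textbf{E}}^{-1}\) are the reciprocals of those of \({\textbf{E}}\).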
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
About this article
Cite this article
Feng, L., Chellappa, S. & Benner, P. A posteriori error estimation for model order reduction of parametric systems. Adv. Model. and Simul. in Eng. Sci. 11, 5 (2024). https://doi.org/10.1186/s40323024002608