A posteriori error estimation for model order reduction of parametric systems

This survey discusses a posteriori error estimation for model order reduction of parametric systems, including linear and nonlinear, time-dependent and steady systems. We focus on introducing the error estimators we have proposed in the past few years and comparing them with the most closely related error estimators from the literature. For a clearer comparison, we have translated some existing error bounds proposed in function spaces into the vector space C^n and provide the corresponding proofs in C^n. Some new insights into our proposed error estimators are explored. Moreover, we review our newly proposed error estimator for nonlinear time-evolution systems, which is applicable to reduced-order models solved by arbitrary time-integration solvers. Our recent work on multi-fidelity error estimation is also briefly discussed. Finally, we derive a new inf-sup-constant-free output error estimator for nonlinear time-evolution systems. Numerical results for three examples show the robustness of the new error estimator.


Introduction
For every model order reduction (MOR) method or algorithm to be eventually used in real applications, accuracy and efficiency of the method play key roles. While many MOR methods are numerically shown to be efficient, not all of them are guaranteed to be reliable. In other words, not all numerically demonstrated efficient MOR methods are associated with computable error estimators, let alone fast-to-compute error estimators. This work reviews a posteriori error estimators for projection-based MOR of parametric systems. Many projection-based MOR methods for parametric systems [1] have been proposed, for example, the multi-moment-matching methods [2,3], methods based on (transfer function, projection matrix, or manifold) interpolation [4][5][6][7][8][9][10][11], the proper orthogonal decomposition (POD) methods [12][13][14], as well as the reduced basis methods [15][16][17][18]. We name those MOR methods for parametric systems pMOR methods. However, error estimation for some of the pMOR methods is not yet widely discussed, for example, error estimation for interpolation-based pMOR methods. While some a posteriori error bounds [15,[17][18][19][20][21][22][23][24][25][26][27][28] have been proposed for reduced-order models (ROMs) obtained from the reduced basis method, most of them are derived using the weak form of the finite element method (FEM). In contrast, we have proposed a posteriori error estimators [29][30][31][32][33][34][35] which are independent of the numerical discretization method. The error estimators are expressed with the already discretized matrices and (nonlinear) vectors. Many of the existing error bounds or error estimators are applicable to ROMs constructed via global projection matrices, regardless of which pMOR method is used for the ROM construction. For the reduced basis method, the projection matrix and the ROM are usually constructed via a greedy process. Multi-fidelity error estimation was recently proposed in [36] to accelerate the greedy algorithm for constructing the projection matrix.
We further discuss a newly proposed error estimator [35] which is independent of the numerical time-integration scheme and therefore is able to estimate the error of the ROM solved with any time integrator. This is desired in many engineering applications, where often commercial software is used to solve the original dynamical systems. Then it is also desirable that the error estimator can be applied to measure the ROM error while the ROM is solved with the same software. However, existing error estimators (bounds) cannot achieve this, since they usually require a pre-defined non-adaptive time-integration scheme. This limits the wide use of the error estimators (bounds).
Finally, we propose an inf-sup-constant-free output error estimator for nonlinear time-evolution systems, which avoids the computation of the smallest singular value σ_min(μ) of a large matrix at each queried sample of the parameter. This not only improves the accuracy of the error estimator for problems with σ_min(μ) close to zero, but also saves a large amount of computation, as computing the singular value entails a computational complexity of at least O(N) for each parameter sample, where N is usually large.
Most of the error estimation methods reviewed in this work are based either on the residual of the ROM approximation or on both the residual and a dual system. Such techniques of using the residual of an approximate solution and a dual system can be traced back to error estimation for FEM approximations, see, e.g., [37].
For clarity, we summarize the new contributions of this survey, which cannot be found in the referenced articles:
• Theorem 2. It transforms the error bound presented in function space in [19,27] into an error bound in the vector space C^n. New proofs are provided in the Appendix.
• Theorem 4. It derives an error bound with quadratic decay in C^n.
• Theorem 5 and its proof. It uses a slightly different dual system (25) and a slightly different auxiliary output ỹ^k(μ) to derive the same output error bound as in [29,30]. Please see Remark 8 for the detailed differences.
• Theorem 7. It quantifies the state error estimator proposed in [38] with computable upper and lower bounds.
• "Inf-sup-constant-free error estimator for time-evolution systems" section. It proposes a new inf-sup-constant-free output error estimator for parametric time-evolution systems.
In the next sections, we discuss error estimation for both time-evolution systems and steady systems. "Problem formulation" section formulates the problems considered in this work, including the original large-scale models and the corresponding ROMs. We first review rigorous error bounds for both kinds of systems and provide some new proofs in "Rigorous a posteriori error bounds" section. Then in "A posteriori error estimators" section, we review error estimators which are no longer rigorous, but decay faster than the error bounds. The error estimators usually also have lower computational complexity than the error bounds. "Error estimator for ROMs solved with any black-box time-integration solver" section reviews the newly proposed error estimator that is applicable to black-box solvers. The recently proposed multi-fidelity error estimation for large and complex systems is reviewed in "Multi-fidelity error estimation" section. It is shown that for some complex systems, the greedy process of constructing the reduced basis can be largely accelerated with multi-fidelity error estimation. "Inf-sup-constant-free error estimator for time-evolution systems" section proposes a new inf-sup-constant-free output error estimator for nonlinear time-evolution systems and presents numerical results. We conclude this survey in "Conclusion" section. This review is not exhaustive, but only contains our contributions to this topic and the most closely related ones from the literature. Other error estimators, in particular error estimators for different types of systems, e.g., error estimation for ROMs of second-order non-parametric systems [39,40], are not discussed. The proper generalized decomposition (PGD) method [41], known as a non-projection-based MOR method, and the corresponding error estimation [42][43][44], are not considered in this survey either. The list of abbreviations is provided below:

Problem formulation
Consider the following parametric time-evolution system of differential algebraic equations (DAEs) given in (1), where t ∈ [0, T] and μ ∈ P ⊂ R^p, P being the parameter domain. x(t, μ) ∈ R^N is the state vector of the system and E(μ), A(μ), ∀μ ∈ P, are the system matrices, f : R^N × P → R^N is the nonlinear system operator and u : [0, T] → R^{n_I} is the external input signal. Such systems often arise from discretizing partial differential equations (PDEs) using numerical discretization schemes, or follow from physical laws. System (1) is called the full-order model (FOM) when we discuss MOR. The number of equations N in (1) is often very large to ensure high resolution of the underlying physical process. Numerically solving the FOM is expensive, especially for multi-query tasks, where the FOM has to be solved at many instances of μ. When n_I > 1 and n_O > 1, the system has multiple inputs and multiple outputs. Such problems are common in electrical or electromagnetic simulation [36]. When we consider error estimation, we usually first assume n_I = n_O = 1; the obtained error estimation is then extended to the more general case n_I > 1 and n_O > 1. The extension is straightforward if the error is measured using the matrix-max norm [31,33,38]. Therefore, if not mentioned explicitly, we consider the case n_I = n_O = 1 such that (1) can be written as (2). Here, the input signal u(t) and the output response y(t, μ) become scalar-valued functions of time and μ, respectively. Consequently, the system matrices become vectors b ∈ R^N and c ∈ R^N. All other quantities remain the same as in (1). We will briefly mention the extension to n_I > 1 and n_O > 1 at proper places. Projection-based MOR techniques obtain a ROM for (2) in the form of (3), where V ∈ R^{N×n} is the μ-independent projection matrix, whose columns are the reduced basis vectors.
The reduced nonlinear vector f̂(·, μ) is obtained by projecting the nonlinear vector f(·, μ) onto the reduced space. The number of equations n in (3) should be much smaller than N in (2), i.e., n ≪ N, so that the ROM can be readily used for repeated simulations. When V = W, the projection is referred to as Galerkin projection. We focus on Galerkin projection, though the error estimators discussed in this work straightforwardly apply to Petrov-Galerkin projection, too.
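To make the projection step concrete, the following minimal NumPy sketch assembles a Galerkin ROM of a linear version of (1): Ê = VᵀEV, Â = VᵀAV, b̂ = Vᵀb, ĉ = Vᵀc. The function name `galerkin_rom` and the random test system are illustrative assumptions, not taken from the survey.

```python
import numpy as np

def galerkin_rom(E, A, b, c, V):
    """Galerkin projection (W = V) of E x' = A x + b u, y = c^T x
    onto span(V). Returns the reduced system matrices."""
    return V.T @ E @ V, V.T @ A @ V, V.T @ b, V.T @ c

# Tiny illustration: a random FOM reduced with an orthonormal basis V.
rng = np.random.default_rng(0)
N, n = 50, 4
E = np.eye(N)
A = -np.eye(N) + 0.1 * rng.standard_normal((N, N))
b = rng.standard_normal((N, 1))
c = rng.standard_normal((N, 1))
V, _ = np.linalg.qr(rng.standard_normal((N, n)))  # orthonormal reduced basis
E_r, A_r, b_r, c_r = galerkin_rom(E, A, b, c, V)
assert E_r.shape == (n, n) and b_r.shape == (n, 1)
```

The orthonormality of V (via QR) is a convenience, not a requirement of the projection itself.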
For steady problems, the parametric system (4) is time-independent, where x(μ) ∈ R^N, and f : R^N × C^p → R^N is the nonlinear system operator. Projection-based pMOR obtains a steady parametric ROM (5), where the reduced nonlinear vector f̂(·, μ) is defined analogously. When the system is linear, the steady system becomes (6), where M(μ) ∈ R^{N×N}, ∀μ ∈ P. The corresponding steady parametric ROM is (7), where M̂(μ) = VᵀM(μ)V.
In the following, we mainly discuss error estimation for the solutions obtained from the ROMs (3) and (7). The norm ‖·‖ refers to the vector 2-norm or the matrix spectral norm throughout the article. The ROM in (3) or (7) is constructed using a global reduced basis V. The error estimation methods reviewed in this work can be applied to measure the error of a ROM obtained using a global reduced basis, irrespective of the method used to construct the reduced basis. In this sense, the error estimation methods are generic and applicable to multi-moment-matching methods, POD methods, reduced basis methods and some interpolation-based methods.

Remark 1
We point out that if the FOMs (1), (2), (4), (6) are obtained from numerical discretization of PDEs, the error estimation discussed in this work, and that in most of the referenced works in the introduction, does not involve the discretization error. This is the case for most of the reduced basis methods in the literature. As the spatial discretization and the model reduction are mostly two separate steps, this is common practice. We note that in case of knowledge of the discretization error, e.g., in adaptive FEM, one can adapt the model reduction error tolerance to this error so that model reduction does not contribute further to the magnitude of the approximation error, by, e.g., choosing the model reduction tolerance to be 1-2 orders of magnitude lower than the discretization error. This is common practice, but beyond the scope of this paper. For works on error estimation including both the discretization error and the ROM error, please refer to [45][46][47]. The error estimation reviewed in this work could be combined with the discretization error estimator [37] to realize adaptivity of the mesh size by checking the two estimated errors separately during a joint greedy process for both spatial discretization and MOR. Moreover, there are FOMs that are not derived by numerical discretization of PDEs, but rather from physical laws; for example, the modified nodal analysis (MNA) in circuit simulation directly results in systems of DAEs. For such systems, we consider the solutions to the FOMs as the exact solutions.

Rigorous a posteriori error bounds
This section reviews rigorous a posteriori error bounds for estimating the ROM error, which are upper bounds of the true errors and therefore rigorous. For time-evolution systems, most of the approaches estimate the error at discrete time instances. There are error bounds for the state error and error bounds for the output error. Output error bounds usually need a dual system to achieve faster decay. We review the error bounds for time-evolution systems and steady systems in separate subsections.

Error bounds for time-evolution systems
The standard error estimation approaches proposed for the reduced basis method are residual-based [17-19, 23, 24]. In order to derive the error bound, knowledge of the temporal discretization scheme used to integrate the FOM and the ROM is assumed, e.g., the implicit Euler method, the Crank-Nicolson method, or an implicit-explicit (IMEX) method. Computing the error bound involves determining the residual vector r(μ) ∈ R^N at each time instance. Some goal-oriented output error estimation approaches also require the residual of a dual system. Suppose (2) is discretized in time using a first-order IMEX scheme [48]. The linear part is discretized implicitly, while the nonlinear vector f(x(t, μ), μ) is evaluated explicitly. The resulting discretized system is (8), with A_δt(μ) := E(μ) − δt A(μ). Here, δt is the temporal discretization step. The error estimation methods discussed in this work may also apply to time-varying δt. For simplicity, we use δt to represent the time-varying case, too. The ROM (3) can be discretized in the same way as (9), where Â_δt(μ) := Ê(μ) − δt Â(μ). The residual from the ROM approximation is computed by substituting the approximate state vector x̃^k(μ) := V x̂^k(μ) into (8). The resulting residual at the k-th time step is r^k(μ) in (10). The nonlinear part of the ROM (9) is not yet hyperreduced. When hyperreduction [49,50], e.g., the discrete empirical interpolation method (DEIM), is applied to (9), we get the ROM in the form of (11). It is clear that in order to obtain the residual r^k(μ), the temporal discretization scheme for the ROM should be the same as that for the FOM so that x̃^k(μ) in (10) and x^k(μ) in (8) correspond to the same time instance t^k.
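The first-order IMEX step (8) and the residual definition (10) can be sketched as follows: the linear part is treated implicitly, the nonlinearity explicitly, and the residual is obtained by inserting an approximate trajectory into the same scheme. The toy system data, the Lipschitz nonlinearity `tanh`, and the function names are assumptions for illustration only.

```python
import numpy as np

def imex_step(E, A, f, b, u_k, x_prev, dt):
    """One first-order IMEX step: solve (E - dt*A) x_k = E x_prev
    + dt*f(x_prev) + dt*b*u_k, i.e. linear implicit, nonlinear explicit."""
    A_dt = E - dt * A
    return np.linalg.solve(A_dt, E @ x_prev + dt * f(x_prev) + dt * b * u_k)

def residual(E, A, f, b, u_k, xt_prev, xt_k, dt):
    """Residual r^k from inserting an approximate trajectory (e.g. the ROM
    reconstruction V x_hat) into the FOM time-stepping scheme."""
    A_dt = E - dt * A
    return E @ xt_prev + dt * f(xt_prev) + dt * b * u_k - A_dt @ xt_k

# Sanity check: the FOM iterate itself yields a (numerically) zero residual.
rng = np.random.default_rng(1)
N = 20
E = np.eye(N)
A = -np.eye(N)
b = rng.standard_normal((N, 1))
f = lambda x: np.tanh(x)            # Lipschitz-continuous nonlinearity
x0 = rng.standard_normal((N, 1))
dt, u1 = 0.01, 1.0
x1 = imex_step(E, A, f, b, u1, x0, dt)
assert np.linalg.norm(residual(E, A, f, b, u1, x0, x1, dt)) < 1e-8
```

Replacing `x1` by a ROM reconstruction V x̂^1(μ) gives a nonzero residual, which is exactly the quantity the error bounds below accumulate.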

State error bound
An a posteriori error bound for the approximation error e^k(μ) := x^k(μ) − x̃^k(μ) can be computed based on the residual, as below.
Theorem 1 (Residual-based error bound) Suppose that the nonlinear quantity f(x(t, μ), μ) is Lipschitz continuous in the first argument for all μ, such that there exists a constant L_f for which the Lipschitz condition holds. Further assume that for any parameter μ the error at the first time step is the projection error, ‖e^0(μ)‖ = ‖x^0(μ) − x̃^0(μ)‖ = ‖x^0(μ) − VVᵀx^0(μ)‖, and that A_δt(μ) is invertible, ∀μ ∈ P. Then the error of the approximate state vector x̃ at the k-th time step, e^k(μ) = x^k(μ) − x̃^k(μ), is bounded as in (12), where ζ(μ) := ‖A_δt(μ)^{-1}‖ and ξ(μ) is defined as in [35].

Proof A proof of the above theorem can be found in [35].

Remark 2 When the system has multiple inputs, the state error bound corresponding to each column of B(μ) can be obtained from Theorem 1. The final state error bound is taken as the maximum over all the column-wise derived state error bounds.
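Since the full statement of (12) is not reproduced here, the sketch below only illustrates the generic structure shared by such residual-accumulation bounds: the estimate at step k is propagated from step k−1 by a growth factor and fed with the scaled residual norm. The names `zeta` and `xi` stand in for the constants ζ(μ) and ξ(μ) of Theorem 1 and are assumptions of this sketch, not the theorem's exact formula.

```python
def state_error_bound(res_norms, e0, zeta, xi):
    """Generic residual-accumulation bound of the type in Theorem 1:
    at step k, Delta^k = xi * Delta^{k-1} + zeta * ||r^k||, started
    from the initial projection error e0 (hedged sketch only)."""
    bounds, acc = [], e0
    for r in res_norms:
        acc = xi * acc + zeta * r
        bounds.append(acc)
    return bounds

# With zero residuals the bound reduces to the propagated initial error.
out = state_error_bound([0.0, 0.0, 0.0], e0=1.0, zeta=2.0, xi=0.5)
assert out == [0.5, 0.25, 0.125]
```

The recursion makes the drawback discussed later explicit: every residual contribution, once added, is carried along for all subsequent time steps.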
In [16,18,19,51], similar state error bounds using the residual r^k(μ) are derived, where only linear systems are considered in [16,18]. The error bound proposed in [19] for nonlinear systems includes the error of hyperreduction for the nonlinear function f(x(t, μ), μ), such that there is an additional term in the error bound. The error bound in [19] is expressed in function space, and is not straightforward to translate into the vector space C^n considered here. An error bound in the vector space C^n accounting for hyperreduction is provided in [29,52]. A state error bound based on an implicit temporal discretization scheme is proposed in [51], where the hyperreduction error is also considered. In summary, all the error bounds discussed in [16,18,19,29,51,52] and the one in Theorem 1 involve summing up the residual r^k(μ) over the discrete time steps.
Remark 3 In [24], a state error bound for the linear version of the time-continuous ROM in (3) is derived, i.e., the nonlinear functions in (2) and in (3) are both assumed to be zero. The error bound is a function of continuous time and continuous parameter. The sum of the residuals r^i(μ) over discrete time steps becomes the integral of the residual over the time interval [0, T].

Output error bound
A straightforward output error bound for the output error can be derived from (12) of Theorem 1 by noticing that ‖e_o^k(μ)‖ ≤ ‖c‖ ‖e^k(μ)‖ [24]. This yields the bound (13). The above output error bound is nevertheless rather conservative, especially when ‖c‖ is large. Moreover, the error bound depends only on ‖r^i(μ)‖, i.e., the primal residual, leading to a linear decay. Primal-dual-based output error bounds are obtained in [19,24,26,27] for linear time-evolution systems, so that the resulting error bounds possess quadratic decay w.r.t. both the primal residual and the dual residual. The output error bounds in [19,26,27] are described in function space based on the weak form of the original PDEs. To be consistent with the system (9) using matrices and vectors, we transform the error bound in [19] into the vector space C^n for the ROM in (9). Theorem 2 shows the translated error bound. The assumptions of the theorem correspond to the assumptions posed in [19] in function space. No additional or stronger limitations are assumed in Theorem 2. Although the proof of the theorem can be done more or less following the idea of the proof in [19], it is very different and is therefore provided in this work. Note that the proof in [19] is divided into several lemmas and thus consists of a sequence of proofs.
Theorem 2 (Output error bound for linear systems) Given a linear FOM in (8), where f(x^k(μ), μ) = 0, consider the output error of its ROM in (9), where f(V x̂^k(μ), μ) = 0. Assume that there is no error at the initial condition, e^0(μ) = x^0(μ) − x̃^0(μ) = 0, and that both matrices −A(μ) and E(μ) are symmetric positive definite. An error bound for the error of a corrected output of the ROM is given by (14), where r_du^j(μ) denotes the residual at the j-th time instance induced by the ROM of the dual system and a parameter-dependent scalar also enters the bound. δ_du^K(μ) is a scaled upper bound for the dual ROM state error ‖x_du^K(μ) − V_du x̂_du^K(μ)‖ at the final time step t = t^K. The dual ROM is obtained by projecting the dual system.

Proof See Appendix.
Remark 4 For multiple input and multiple output systems, an output error bound corresponding to each column of B(μ) and each row of C(μ) can be derived from Theorem 2.
The final output error bound is then taken as the maximum over all column-row-wise derived output error bounds. Please refer to the more detailed derivation for steady systems in the next "Error bounds for steady systems" section.
Remark 5 In [24], a similar primal-dual based output error bound is obtained for the time-continuous ROM in (3). That output error bound estimates the output error at the final time T. The sums of the primal residuals r^k(μ) and the dual residuals r_du^k over time instances in (14) then become two integrals over the time variable from 0 to T. The initial approximation error ‖e^0(μ)‖ was assumed to be zero in [19], while it appears in the error bound in [24]. The parameter-dependent scalar is also defined differently in [24]. In contrast to [24], where the error estimation is derived in the vector space C^n, in [26] a time-continuous output error bound is derived in function space based on the weak form of the original PDEs. The error bounds proposed in [15,24,27] are also reviewed in the survey paper [28] on the reduced basis method.
Remark 6 Theorem 2 is restricted in the sense that both E(μ) and −A(μ) are assumed to be symmetric positive definite. Our proposed error estimators to be discussed in "A posteriori error estimators" section do not need this assumption.
The primal-dual based output error bound in (14) has a quadratic behavior in the sense that it involves the product of the primal residuals with the dual residuals. Therefore, it is expected that the error bound decays faster than the primal-only output error bound in (13). Note that all the above reviewed error bounds estimate the error by accumulating the residuals over time.

Error bounds for steady systems
In this subsection, we discuss error bounds for steady systems as in (6). Analogous to the time-evolution systems, the error bounds for both the state error and the output error rely on the spectral norm ‖M(μ)^{-1}‖, i.e., the reciprocal of the smallest singular value of the matrix M(μ) for any given μ.

State error bound
The state error bound for the state error e(μ) = x(μ) − V x̂(μ) can be easily derived by noticing that M(μ)e(μ) = r(μ), which finally yields the bound (18). Similar error bounds for the state error have been proposed for the reduced basis method [17,18] based on the weak form of the PDEs, and they are written in functional form. Here, we derive the error bound in the vector space C^n for the spatially discretized system (6) written using matrices and vectors. For systems with multiple inputs, b(μ) is a matrix, and then e(μ) is also a matrix. Considering the i-th column e_i(μ) of e(μ), we get the column-wise bound [38], where r_i(μ) is the i-th column of r(μ). The final bound Δ_s(μ) is then taken as the maximum over the column-wise bounds for multiple-input systems. In [17,23,53], error bounds for the nonlinear steady systems (4) and (5) are also obtained, where ‖M(μ)^{-1}‖ on the right-hand side of (18) is replaced by the reciprocal of the smallest singular value of a properly defined Jacobian matrix in [17], whereas in [23], it is replaced by the reciprocal of a lower bound on the coercivity constant of a linear operator. In [53], under some assumptions, e(μ) is bounded as in (19), where ê(μ) is a properly computed approximation of e(μ), which is the solution to the residual system. Here, J_f(μ) is the Jacobian matrix of f in (4) w.r.t. x(μ). In (19), r_r(μ) := r(μ) − J_f ê(μ) is the residual induced by the approximation ê(μ) to e(μ).
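A minimal sketch of the steady state error bound (18), computed as ‖M(μ)^{-1}‖·‖r(μ)‖ with ‖M(μ)^{-1}‖ evaluated via the smallest singular value of M(μ). The random test system and the function name are assumptions for illustration; the final assertion checks the bound property ‖e(μ)‖ ≤ Δ_s(μ).

```python
import numpy as np

def steady_state_error_bound(M, b, V, x_hat):
    """State error bound for the steady linear system M x = b:
    ||e|| <= ||M^{-1}|| * ||r||, with r = b - M V x_hat and
    ||M^{-1}|| the reciprocal of the smallest singular value of M."""
    r = b - M @ (V @ x_hat)
    sigma_min = np.linalg.svd(M, compute_uv=False)[-1]
    return np.linalg.norm(r) / sigma_min

# The bound dominates the true error on a small random example.
rng = np.random.default_rng(2)
N, n = 30, 3
M = np.eye(N) + 0.1 * rng.standard_normal((N, N))
b = rng.standard_normal(N)
V, _ = np.linalg.qr(rng.standard_normal((N, n)))
x_hat = np.linalg.solve(V.T @ M @ V, V.T @ b)      # Galerkin ROM solve
true_err = np.linalg.norm(np.linalg.solve(M, b) - V @ x_hat)
assert steady_state_error_bound(M, b, V, x_hat) >= true_err
```

The full SVD here is only for demonstration; for large N its O(N) (or worse) cost per parameter sample is precisely the drawback discussed in the surrounding text.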
In summary, the error bound Δ_s(μ), as well as the error bounds derived in [17,18,23], all depend on the spectral norm of the inverse of a properly defined matrix, or on a coercivity constant (usually a lower bound of it), whose computation entails a complexity depending on the large dimension N. Sometimes, the smallest singular value of M(μ) or the coercivity constant is so small that the error estimation becomes very rough.

Output error bound
Analogous to the time-evolution systems, estimating the output error y(μ) − ŷ(μ) of the ROM (7) is also based on a dual system and its ROM, defined in (20) and (21), respectively. The following theorem states an output error bound using the dual system (20) and its ROM (21).
The above primal-dual based output error bound is motivated by the primal-dual error bounds proposed early on in [20][21][22], etc., though the derivations in [20][21][22] are in function space and therefore differ. An even earlier primal-dual based output error bound in function space can be found in [25]. For systems with multiple inputs and multiple outputs, an error bound in the matrix-max norm can be derived [33]. To this end, we first get the error bound for the (i, j)-th entry of the output error matrix e_o(μ), which can be straightforwardly derived from (22), with r_du,i(μ) being the corresponding dual residual. Here, x̂_du,i(μ) and x̂_j(μ) are the i-th and j-th columns of x̂_du(μ) and x̂(μ), respectively. The error bound for ‖e_o(μ)‖_max is then defined as the maximum over the entry-wise bounds. The error bounds in (22) and (23) do not decay quadratically, since there is a second term which is not a quadratic function of the two residual norms. However, with some modifications or assumptions, we can derive error bounds that are quadratic.
Theorem 4 (Output error bound for linear steady systems with quadratic behavior) When b(μ) = cᵀ(μ), if we modify the output of the ROM in (7) to ȳ(μ) = ŷ(μ) + (V_du x̂_du)ᵀ r(μ), then the bound for the output error y(μ) − ȳ(μ) becomes (24).

Proof When b(μ) = cᵀ(μ), the dual system (20) is the same as the primal system (6), so that V_du = V and x̂_du(μ) = x̂(μ). This leads to [V_du x̂_du(μ)]ᵀ r(μ) = 0 in (22) or (23), and we obtain (24).

With the corrected output, the output error bound has a quadratic behavior. The same technique was previously used in [20][21][22][25] for error analysis in function space. The error bound in (24) is in agreement with the analysis in, e.g., [20]. It is worth pointing out that using a corrected output to obtain error estimation with quadratic decay was proposed early on for the finite element method (FEM) [37]. Analogous to the state error bound in (18), the smallest singular value of M(μ) for any given μ must be computed in order to compute the output error bound.
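The output correction ȳ(μ) = ŷ(μ) + (V_du x̂_du)ᵀ r(μ) can be sketched as below. Since the definition (20) of the dual system is not reproduced here, the sketch assumes the convention Mᵀ x_du = c; under this assumption, plugging in the exact dual solution recovers the FOM output exactly (y − ŷ = x_duᵀ r), which the final assertion checks with the trivial dual "basis" V_du = I.

```python
import numpy as np

def corrected_output(M, b, c, V, V_du):
    """Corrected ROM output ybar = yhat + (V_du x_du_hat)^T r for the
    steady linear system M x = b, y = c^T x (sketch; dual system
    assumed as M^T x_du = c)."""
    x_hat = np.linalg.solve(V.T @ M @ V, V.T @ b)                # primal ROM
    x_du_hat = np.linalg.solve(V_du.T @ M.T @ V_du, V_du.T @ c)  # dual ROM
    r = b - M @ (V @ x_hat)                                      # primal residual
    y_hat = c @ (V @ x_hat)                                      # plain ROM output
    return y_hat + (V_du @ x_du_hat) @ r

rng = np.random.default_rng(3)
N, n = 30, 6
M = np.eye(N) + 0.05 * rng.standard_normal((N, N))
b = rng.standard_normal(N)
c = rng.standard_normal(N)
V, _ = np.linalg.qr(rng.standard_normal((N, n)))
y = c @ np.linalg.solve(M, b)   # FOM output for comparison
# Exact dual solution => the correction removes the output error entirely.
assert abs(y - corrected_output(M, b, c, V, np.eye(N))) < 1e-8
```

In practice V_du is of course a small dual reduced basis rather than the identity; the corrected output is then accurate up to the dual approximation error times the primal residual.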

A posteriori error estimators
This section discusses a posteriori error estimators that may lose the rigorousness of the error bounds. However, these error estimators aim to reduce the large gap between the error bounds and the true error occurring in many problems. Usually, the ratio (error bound)/(true error) or (error estimator)/(true error) is considered as the effectivity of an error bound/estimator. The a posteriori error estimators discussed in this section aim at effectivities closer to 1 than those of the error bounds. At the same time, they usually have lower computational complexity than the error bounds. In the following subsections, we again separately discuss error estimation for time-evolution systems and for steady systems.

Error estimators for time-evolution systems
The error estimators discussed in this subsection aim to estimate the error of the time-discrete ROM in (9). The works in [29][30][31] propose output error estimators which avoid accumulating (summing up) the residuals over the time evolution, resulting in much tighter error estimators than the primal-dual based error bounds in "Error bounds for time-evolution systems" section. Furthermore, the output error estimators in [29,30] apply to both nonlinear and linear systems. For nonlinear systems, the error estimators can also include the approximation error of hyperreduction. We review those error estimators in the following theorems.

Output error estimators
The output error estimators need a dual system, defined in (25). The ROM of the dual system is derived by projection, as in (26).

Remark 7 The dual system defined in (25) is slightly different from that in [29][30][31], where the right-hand side is −c(μ)ᵀ instead of c(μ)ᵀ. To be consistent with the definition of the corrected output ȳ(μ) for the steady systems in Theorem 4, we use the dual system (25), based on which the corrected output for the nonlinear time-evolution systems, to be defined later, will have a uniform form as the corrected output for the steady systems.
The residual induced by the approximate solution V_du x̂_du computed from the ROM in (26) is the dual residual r_du^k(μ). Define an auxiliary residual r̃^k(μ); it can be seen how r̃^k(μ) differs from r^k(μ) in (10). With r̃^k(μ), we can derive a direct relation between r̃^k(μ) and the state error x^k(μ) − x̃^k(μ). This relation aids the derivation of the output error estimation.
Remark 8 Recall that the dual system defined in (25) is slightly different; therefore, the proof in [29,30] does not directly apply here. Although the proof above is similar to the proofs in [29,30], it is not the same. In particular, the auxiliary variable ỹ^k(μ) differs from that in [29,30].
When hyperreduction is applied to the ROM (9), we get the hyperreduced ROM in (11). An output error estimation for the hyperreduced ROM is derived in [29,30] as in (33) and (34). The hyperreduction error term therein can be computed using, e.g., the technique in [51]. The output error estimators in (33) and (34) do not include the sum of the residuals over the time instances and are expected to be much tighter than the rigorous output error bound. In the numerical results in [29] for a linear system, it is shown that the error estimator yields a more accurate estimation of the true error than the error bound in [19,27].
Remark 9 For multiple-input multiple-output systems, the corresponding output error estimator can be obtained using the matrix-max norm as explained in (23) and (24).
The error estimators in (33) and (34) do not decay quadratically w.r.t. the two residuals because of the second, non-quadratic part of the estimator. In [31], we use a corrected output of the ROM, so that the finally derived error estimator includes a much smaller contribution of this second part. This makes the error estimator decay almost quadratically.
Define a corrected output ȳ^k(μ) = ŷ^k(μ) + (V_du x̂_du^k(μ))ᵀ r^k(μ) for the ROM in (9). With the same assumptions as in Theorem 5, and the Lipschitz continuity of f(x(t, μ), μ), the output error ē_o^k(μ) = y^k(μ) − ȳ^k(μ) can be estimated as in [31] by (35). Comparing the error estimator in (35) with that in (33), we find that the second, non-quadratic term is still there, but with a scaling factor |1 − ρ| instead of ρ. It is analyzed in [30] that, under certain assumptions, when the POD-greedy algorithm used to compute the projection matrix V converges, ρ gets closer to 1, meaning that |1 − ρ| will be closer to 0. This makes the second part of (35) tend to zero, while the second part of (28) remains away from zero. Therefore, the error estimator for the corrected output error should give a tighter estimation. The derivation follows almost that in [31] and the proof of Theorem 5, noticing that the dual system and the corrected output are slightly different from those in [31]. We do not repeat it here.
With simple calculations, the corrected output error for the hyperreduced ROM in (11) can be estimated analogously [31].

Error estimators for linear steady systems
Some error estimators [33,34,38,54] for linear steady systems have been proposed in order to avoid estimating/computing the spectral norm ‖M(μ)^{-1}‖ involved in the error bounds in "Error bounds for steady systems" section. An approach based on randomized residuals is proposed in [54], where some randomized systems are defined to obtain error estimators for both the state error and the output error. It is discussed in [34] and [38] that the error estimators in [54] are theoretically less accurate than the estimators proposed in [33,38]. The error estimators in [54] more easily underestimate the true error than the estimators in [33,38], which is also numerically demonstrated in [33,38]. Here we review the error estimators proposed in our recent work [33,34,38].

State error estimators
The error estimator proposed in [38] estimates the state error for linear steady systems. For the FOM in (6), the error e(μ) := x(μ) − V x̂(μ) of the approximate state V x̂(μ) computed by the ROM (7) can be estimated as in (37), where x̂_r(μ) is the solution to the ROM (38). Here, M̂_r(μ) = V_rᵀM(μ)V_r and r̂(μ) = V_rᵀr(μ), with V_r being properly derived, and r(μ) = b(μ) − M(μ)V x̂(μ). The system (38) is the ROM of the residual system (39).

Remark 10 We note that a similar technique of using an approximate solution to a residual system as an error estimator for the state error was already proposed for the finite element method (FEM) (see [37] and the references therein). There, the approximate solution was not obtained from a ROM of the residual system.
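A sketch of the estimator (37): solve a small ROM of the residual system (39) with a second basis V_r and use ‖V_r x̂_r(μ)‖ directly, with no singular-value or inf-sup-constant computation. How V_r is derived is not shown here; the sketch assumes it is given, and uses the trivial choice V_r = I only to sanity-check that an exact residual-system solve recovers the true error.

```python
import numpy as np

def state_error_estimator(M, b, V, x_hat, V_r):
    """Estimate ||x - V x_hat|| by approximately solving the residual
    system M e = r with a second Galerkin ROM (basis V_r) and
    returning ||V_r e_hat|| (sketch; V_r assumed given)."""
    r = b - M @ (V @ x_hat)
    e_hat = np.linalg.solve(V_r.T @ M @ V_r, V_r.T @ r)
    return np.linalg.norm(V_r @ e_hat)

rng = np.random.default_rng(4)
N, n = 30, 4
M = np.eye(N) + 0.05 * rng.standard_normal((N, N))
b = rng.standard_normal(N)
V, _ = np.linalg.qr(rng.standard_normal((N, n)))
x_hat = np.linalg.solve(V.T @ M @ V, V.T @ b)
# With the trivial "basis" V_r = I the estimator recovers the true error.
true_err = np.linalg.norm(np.linalg.solve(M, b) - V @ x_hat)
assert abs(state_error_estimator(M, b, V, x_hat, np.eye(N)) - true_err) < 1e-8
```

With a genuinely small V_r, the estimator costs only one extra reduced solve per parameter sample, which is the point of the construction.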
The accuracy of the error estimator ‖V_r x̂_r(μ)‖ in (37) is quantified in [38]:

Theorem 6 (Quantifying the error estimator [38]) The state error ‖e(μ)‖ is bounded from below and above in terms of the estimator and an error quantity δ(μ). Whenever the ROM (38) of the residual system is accurate enough, δ(μ) will be small. However, how to further quantify the error δ(μ) was left open. We derive the following theorem with computable upper and lower bounds.
Proof The proof can be easily done. Note that for linear systems, the upper bound in (40) is only half the upper bound in (19).

Output error estimators
The error estimators in [33,34] estimate the error e_o(μ) := y(μ) − ŷ(μ) of the output ŷ(μ) computed from the ROM (7). In [34], we derive a primal-dual based output error estimator o1(μ), where x̂_du(μ) is the solution to the reduced dual system M̂_du(μ) x̂_du = ĉ_du(μ), (41) with M̂_du(μ) = V_duᵀM(μ)ᵀV_du. The reduced dual system is a ROM of the dual system (42).

Remark 11 In [37] and the references therein, the FEM approximation error was also estimated using a similarly defined dual system in function space. However, the approximate solution to the dual system there is not the solution of a ROM of the dual system. The approximate dual solution is then multiplied with the residual of the FEM approximation to the original PDEs to constitute a primal-dual based error estimator for the output error of the FEM approximation.

The randomized output error estimator in [54] is based on the output error estimator o1(μ). On the one hand, it is analyzed in [34] that o1(μ) is more accurate than the randomized output error estimator; on the other hand, it is also numerically demonstrated in [34] that o1(μ) is nevertheless less accurate than the other estimators proposed in [33,34]. In the following, we first introduce the primal-dual output error estimator in [33], which involves a dual-residual system defined as M(μ)ᵀ x_r,du(μ) = r_du(μ), and its ROM M̂_r,du(μ) x̂_r,du(μ) = r̂_du(μ), where M̂_r,du(μ) = V_r,duᵀM(μ)ᵀV_r,du and r̂_du(μ) = V_r,duᵀ r_du(μ), with V_r,du being properly computed. The dual residual r_du(μ) := c(μ)ᵀ − M(μ)ᵀV_du x̂_du is the residual induced by the approximate solution V_du x̂_du computed from the dual ROM (41). A primal-dual and dual-residual based output error estimator proposed in [33] is stated as follows. For the FOM in (6), the output error e_o(μ) of the ROM (7) can be estimated by o2(μ) in (43). The error estimator o2(μ) in (43) has an additional term |(V_r,du x̂_r,du(μ))ᵀ r(μ)| compared to o1(μ). We now discuss the accuracy of both estimators through the next theorems.
We can observe that the two estimators are related through the residual r_rdu(μ) := r_du(μ) − M(μ)^T V_rdu x̂_rdu(μ) induced by the reduced dual-residual system.
We can further derive upper bounds δ̄_1(μ) and δ̄_2(μ) for δ_1(μ) and δ_2(μ), respectively. Although we have no proof yet, it is expected that ‖r_rdu(μ)‖ ≤ ‖r_du(μ)‖ in general, since r_rdu(μ) is the residual induced by the ROM of the dual-residual system whose right-hand side is r_du(μ). Consequently, we should have δ̄_2(μ) ≤ δ̄_1(μ), indicating that Δ_2^o(μ) should be more accurate than Δ_1^o(μ). On the other hand, δ̄_2(μ) ≤ δ̄_1(μ) also implies that the underestimation of the true error by Δ_2^o(μ) should be smaller than that by Δ_1^o(μ).
In [34], we have further proposed another output error estimator variant Δ_3^o(μ), which has lower computational complexity than Δ_2^o(μ) but similar, or sometimes even better, accuracy. It does not depend on the dual system and/or the dual-residual system as Δ_1^o(μ) and Δ_2^o(μ) do, but on the primal-residual system in (39). Comparing Δ_3^o(μ) with the state error estimator ‖V_r x̂_r(μ)‖ in (37), we see that the only difference is the output matrix c(μ). Both are derived by employing the primal-residual system (39).
Theorem 10 (Quantifying the output error estimator Δ_3^o(μ) [34]) The output error e_o(μ) is bounded in terms of δ_3(μ); with simple calculations, an upper bound δ̄_3(μ) of δ_3(μ) can be derived. It can be easily seen that computing Δ_3^o(μ) requires only one additional ROM, i.e., the ROM of the primal-residual system (38), while computing Δ_2^o(μ) requires two additional ROMs. Theoretically, the upper bound δ̄_2(μ) should decay faster than the upper bound δ̄_3(μ), implying that Δ_2^o(μ) should be more accurate than Δ_3^o(μ). However, in our numerical simulations on several different problems [34], Δ_3^o(μ) is even more accurate than Δ_2^o(μ).

Error estimator for ROMs solved with any black-box time-integration solver
The error bounds and error estimators reviewed in the previous sections are all residual based. In particular, for time-evolution systems, the error bound and estimators need to compute the residual r^k(μ) at the corresponding time instances t^k, k = 1, ..., K. It is clear that to compute r^k(μ), the temporal discretization scheme applied to the FOM must be known, so that r^k(μ) in (10) can be derived by inserting the approximate solution x̃^k(μ) into the temporal discretization scheme, e.g., (8), and subtracting the left-hand side of the first equation from its right-hand side. Moreover, the temporal discretization scheme (8) for the ROM (3) must be the same as that for the FOM, to make sure that x̂^k(μ) computed from the ROM (8) corresponds to the true solution x^k(μ) at the same time instance t^k. These two requirements on the FOM and the ROM become limitations for the error bounds (estimators) when the FOM is simulated by a black-box time-integration solver and/or when the ROM is also desired to be solved using the same black-box time-integration solver.
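To make this concrete, the residual computation for a linear FOM discretized with implicit Euler can be sketched as below; the matrix names E, A, b and the scheme are illustrative assumptions rather than the exact notation of (8) and (10).

```python
import numpy as np

def residual_implicit_euler(E, A, b, V, x_hat, dt):
    """Residual norms ||r^k|| obtained by inserting the lifted ROM solution
    V @ x_hat[k] into the FOM's implicit Euler scheme
        (E/dt - A) x^k = (E/dt) x^{k-1} + b.
    x_hat: array of shape (K+1, n) holding the reduced solutions over time."""
    At = E / dt - A                      # time-discrete system matrix A_t
    X = x_hat @ V.T                      # lift: row k is V x_hat^k, shape (K+1, N)
    res = []
    for k in range(1, X.shape[0]):
        r_k = (E / dt) @ X[k - 1] + b - At @ X[k]
        res.append(np.linalg.norm(r_k))
    return np.array(res)
```

If the lifted ROM solution happened to satisfy the FOM scheme exactly (e.g., V is the identity), all residual norms vanish, illustrating that r^k measures exactly the mismatch of the ROM solution in the FOM's time-discrete equations.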
In [35], we propose a new error estimator which is applicable when both the FOM and the ROM are solved by a black-box solver. We make use of a user-defined implicit-explicit (IMEX) temporal discretization scheme to derive the new error estimator. Although potentially any IMEX scheme can be applied, we consider the first-order IMEX scheme (8) in this survey. Note that the second-order IMEX scheme is also used in [35].
Since the first-order IMEX scheme (8) differs from the black-box solver, we have a defect, or mismatch, when we insert the solution snapshots x^k(μ) computed from the black-box solver into the first-order IMEX scheme.
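A sketch of this defect computation, under the assumption of a linear implicit part and an explicitly treated nonlinearity f (all names illustrative):

```python
import numpy as np

def defect_vectors(E, A, b, f, X, dt):
    """Defect d^k obtained by inserting black-box solver snapshots X[k]
    into a user-defined first-order IMEX scheme
        (E/dt - A) x^k = (E/dt) x^{k-1} + f(x^{k-1}) + b.
    X: snapshot array of shape (K+1, N) from the black-box solver."""
    At = E / dt - A
    D = [At @ X[k] - (E / dt) @ X[k - 1] - f(X[k - 1]) - b
         for k in range(1, X.shape[0])]
    return np.array(D)                   # rows are d^1(mu), ..., d^K(mu)
```

By construction, the defect is identically zero whenever the black-box solver happens to coincide with the user-defined IMEX scheme; otherwise d^k quantifies the per-step mismatch to be learned.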
Although the time-integration scheme of the black-box solver is invisible, we can use the solution snapshots x^k(μ), k = 0, ..., K, at some samples of μ to learn the defect vector d^k(μ). We then use d^k(μ) to correct the user-defined scheme (8), such that its solution recovers the solution x^k(μ) computed by the black-box solver; the temporal discretization scheme of the black-box solver then becomes visible via the corrected time-discrete FOM in (45). It is clear that if d^k(μ) can be accurately learned, then not only does x_c^k(μ) in (45) recover x^k(μ), but the FOM in (45) is also equivalent to the temporal discretization scheme of the black-box solver. The ROM of the FOM in (45) can be obtained as in (46), where Â_t(μ), Ê(μ), f̂(·,·), b̂(μ), ĉ(μ) are defined as in (3) and d̂^k(μ) = V^T d^k(μ). We make use of both the corrected FOM (45) and the corresponding ROM (46) to derive output error estimation for the output error |y^k(μ) − ŷ^k(μ)|, where y^k(μ) and ŷ^k(μ) are the outputs of the FOM in (2) and the ROM in (3) at any time instance t^k, respectively. Both systems can be solved using any black-box solver. Given the FOM in (2), assuming that A_t(μ) is non-singular for all μ ∈ P, the nonlinear function f(x(t,μ),μ) is Lipschitz continuous w.r.t. x(t,μ), and the defect vector d(μ) can be accurately learned, then the output error |y^k(μ) − ŷ^k(μ)| of the ROM (3) can be estimated as in (47) [35], in terms of the residual induced by the d-corrected ROM (46) and the corrected output ȳ_c^k(μ) computed from (46); Δ̄^k(μ), V, V_du, x̂_du^k(μ) and r_du^k(μ) are defined as before.
The corrected output ȳ_c^k(μ) here is defined slightly differently from that in [35]. The corresponding dual system in [35] is also slightly different from that in (25); please also refer to Remark 7. However, the derivation of the error estimator is very similar to that in [35] and is not repeated here.
When hyperreduction is considered, the ROM (3) becomes the d-corrected hyperreduced ROM given in (48). Error estimation for the output error of the ROM (48) is stated in (50), where the residual is now the one induced by the d-corrected hyperreduced ROM (49), and the hyperreduction error term Δ_I^k(μ) is defined as before.
Now we come to the problem of accurately learning d(μ), so that (47) gives an accurate error estimate for ROMs solved with black-box solvers. In [35], we use proper orthogonal decomposition (POD) combined with radial basis function (RBF) interpolation or with a feed-forward neural network (FFNN) to learn d(μ). POD is first used to project d(μ) ∈ R^N onto a lower-dimensional subspace. RBF or FFNN is then used to learn the projected short vector d̂(μ). The POD basis is computed from a two-stage singular value decomposition (SVD) of the snapshot matrix D := [d^0(μ_1), ..., d^K(μ_1), ..., d^0(μ_s), ..., d^K(μ_s)], where each d^i(μ_j) is the defect vector evaluated at time instance t^i and parameter sample μ_j. All details can be found in [35].
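A simplified sketch of this learning step, using a single-stage truncated SVD in place of the two-stage SVD of [35] and scipy's RBFInterpolator (names and shapes are illustrative assumptions):

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

def learn_defect(D, params, rank):
    """POD + RBF sketch: compress the defect snapshot matrix D (N x m)
    with a truncated SVD and interpolate the reduced coefficients over
    the m time/parameter coordinates `params` (m x p)."""
    U = np.linalg.svd(D, full_matrices=False)[0][:, :rank]   # POD basis
    coeffs = U.T @ D                         # (rank, m) reduced coordinates
    rbf = RBFInterpolator(params, coeffs.T)  # interpolate each coefficient
    return lambda q: U @ rbf(np.atleast_2d(q)).ravel()       # d(q) ≈ U d̂(q)
```

The returned callable predicts the full defect vector at a new time/parameter coordinate; with zero smoothing the RBF interpolant reproduces the training snapshots exactly whenever D has rank at most `rank`.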
Remark 12 While the new error estimator is based on our earlier proposed output error estimator in [31], the idea can be directly applied to derive a posteriori state error estimators (bounds).

Multi-fidelity error estimation
This section briefly reviews our recent multi-fidelity error estimation used for accelerating the weak greedy algorithm. Weak greedy algorithms are often used to iteratively construct the reduced basis V for MOR of parametric steady systems. A sketch of the algorithm is given in Algorithm 1.
Key points for the greedy algorithm to converge fast are a properly chosen training set Ξ and an efficient, fast-to-compute error estimator (bound) Δ(μ). For some complex problems, even if the cardinality of the training set is not large, computing Δ(μ) over Ξ at each iteration is slow. In [36], we propose the concept of multi-fidelity error estimation to accelerate the greedy iteration.
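For orientation, a weak greedy loop in the spirit of Algorithm 1 may be sketched as follows; solve_fom and estimator are assumed callables standing in for the FOM solve and Δ(μ):

```python
import numpy as np

def weak_greedy(solve_fom, estimator, train_set, tol, max_iter=50):
    """Weak greedy sketch: at each iteration pick the parameter with the
    largest estimated error and enrich V with the FOM snapshot there."""
    mu_star = train_set[0]
    V = np.empty((solve_fom(mu_star).size, 0))
    for _ in range(max_iter):
        x = solve_fom(mu_star)
        x = x - V @ (V.T @ x)              # Gram-Schmidt against current V
        if np.linalg.norm(x) > 1e-12:
            V = np.column_stack([V, x / np.linalg.norm(x)])
        errs = [estimator(mu, V) for mu in train_set]
        idx = int(np.argmax(errs))
        mu_star, eps = train_set[idx], errs[idx]
        if eps <= tol:                     # stopping criterion of Algorithm 1
            break
    return V
```

The expensive part is exactly the line evaluating the estimator over the whole training set at every iteration, which is what the multi-fidelity strategy below targets.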
We start with a coarse training set Ξ_c of even smaller cardinality, i.e., |Ξ_c| ≤ |Ξ|, and evaluate Δ(μ) only over Ξ_c at each greedy iteration. At the same time, a surrogate estimator is constructed based on the already available values of Δ(μ) over Ξ_c. This surrogate is supposed to be much cheaper to compute than Δ(μ), so that it can be evaluated quickly over a fine training set Ξ_f with much larger cardinality than |Ξ|. Using the results of the surrogate estimator over Ξ_f, we enrich Ξ_c with the parameter sample selected by the surrogate, namely the sample corresponding to the largest value of the surrogate. The parameter sample corresponding to the smallest value of Δ(μ) over Ξ_c is simultaneously removed from Ξ_c. This way, Ξ_c stays small over the iterations but is continually updated to contain only the important parameter samples. In the greedy process, those samples correspond to large ROM errors and are good candidates for greedy parameter selection in the next iterations.
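One exchange step of the coarse training set can be sketched as below; estimator and surrogate are assumed callables for Δ(μ) and its surrogate:

```python
import numpy as np

def update_coarse_set(coarse, fine, estimator, surrogate):
    """One bi-fidelity exchange step: evaluate the (expensive) estimator
    only on the coarse set, evaluate the cheap surrogate on the fine set,
    then swap the worst fine-set sample in and the best coarse sample out."""
    vals_c = np.array([estimator(mu) for mu in coarse])
    vals_f = np.array([surrogate(mu) for mu in fine])
    mu_in = fine[int(np.argmax(vals_f))]        # largest surrogate value
    i_out = int(np.argmin(vals_c))              # smallest estimator value
    new_coarse = [mu for i, mu in enumerate(coarse) if i != i_out]
    if all(mu_in != mu for mu in new_coarse):   # avoid duplicates
        new_coarse.append(mu_in)
    return new_coarse
```

Repeating this step inside the greedy loop keeps the coarse set small while steering it toward the parameter samples with the largest (surrogate-predicted) errors.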
This process of using a surrogate estimator in the greedy algorithm was originally proposed in [55] for time-evolution nonlinear systems. In [36], we define this as bi-fidelity error estimation, since both the original estimator Δ(μ) and a surrogate estimator are used for estimating the error in the greedy process. Based on that, we further propose multi-fidelity error estimation, which depends on the structure of the original error estimator Δ(μ) [36]. Taking the output error estimator Δ_3^o(μ) as an example, two projection matrices V and V_r must be constructed in order to compute Δ_3^o(μ). When we replace Δ(μ) in Algorithm 1 with Δ_3^o(μ), we need to iteratively update both V and V_r with snapshots by solving the FOM in (6) at two greedily selected parameter samples. If at a certain stage, e.g., when the estimated ROM error is smaller than a small value θ < 1 but still larger than the error tolerance tol, we stop updating V_r, then the FOM in (39) at one of the two selected parameter samples does not have to be solved. Consequently, we save the runtime of solving a large FOM at the subsequent iterations. At the same time, the original Δ_3^o(μ) is degraded to a low-fidelity error estimator. The surrogate estimator is then constructed based on this low-fidelity estimator in the later stage of the greedy process. In total, we employ the original estimator Δ_3^o(μ), a low-fidelity estimator, and their respective surrogates in the whole greedy process. We call this multi-fidelity error estimation and sketch the concept in Fig. 1. It is shown in [36] that the greedy process employing multi-fidelity error estimation is much faster than the standard weak greedy algorithm for some large-scale time-delay systems with hundreds of delays.
The error estimators presented in the previous sections have been numerically compared in the individual papers. For an overview, we list the comparisons below; the sections referred to are those of this survey where the corresponding error estimators are reviewed.
• In [29,30], the error estimator proposed there ("Error estimators for time-evolution systems" section) is numerically compared with the error bound in [19,27] ("Error bounds for time-evolution systems" section) for parametric time-evolution systems.
• In [31], the error estimator with corrected output ("Error estimators for time-evolution systems" section) is numerically compared with the error estimator in [29,30] for parametric time-evolution systems.
Fig. 1 The concept of multi-fidelity error estimation in a greedy process, where Δ(μ) represents any original error estimator; a low-fidelity error estimator is obtained when we stop updating partial information of Δ(μ); surrogates of both estimators are also used. tol and ε are defined in Algorithm 1; tol < θ < 1 is a user-defined small value.
• In [38], the proposed state error estimator ("Error estimators for linear steady systems" section) is compared with the state error bound ("Error bounds for steady systems" section) for parametric steady systems from computational electromagnetics.
• In [33], the newly proposed output error estimator Δ_1^o(μ) ("Error estimators for linear steady systems" section) is compared with the output error bound (22) in [32] ("Error bounds for steady systems" section) for parametric linear steady systems. It is also compared with an existing randomized error estimator from [54].
• In [34], several more output error estimators are proposed and compared with each other; they are also compared with the output error estimator Δ_1^o(μ) proposed in [33] ("Error estimators for linear steady systems" section).
• In [35], a new error estimator ("Error estimator for ROMs solved with any black-box time-integration solver" section), applicable when both the FOM and the ROM are solved by a black-box solver, is compared with the output error estimator in [31] for parametric nonlinear time-evolution systems.
• In [36], the multi-fidelity error estimation ("Multi-fidelity error estimation" section) is numerically compared with the standard greedy process using only a single high-fidelity error estimator, for time-delay systems with more than one hundred delays.

Inf-sup-constant-free error estimator for time-evolution systems
While the error estimators for time-evolution systems described in the "Error estimators for time-evolution systems" section are accurate, their computation involves the quantities Δ^k(μ) and Δ̄^k(μ), for which the term ‖[A_t(μ)]^{-1}‖ = 1/σ_min(A_t(μ)) needs to be evaluated for every μ, where σ_min(A_t(μ)) is the smallest singular value of the matrix A_t(μ). In function space, σ_min(A_t(μ)) corresponds to the inf-sup constant of a linear operator [56]. This poses two challenges. Firstly, the complexity of computing the smallest singular value is at least linearly dependent on N for each parameter sample. When the number of parameter samples is high (typical for problems with several parameters or parameters with a wide range), this can lead to a significant increase of the offline computational cost. Secondly, for some applications the matrix A_t(μ) may be poorly conditioned, leading to σ_min(A_t(μ)) close to zero, which can cause the estimated error to blow up. While methods exist in the literature [56-58] to address the increased computational cost, these approaches are somewhat heuristic and careful tuning of the involved parameters is needed to achieve good results. In the following theorem, we derive a new output error estimator for time-evolution systems that avoids the inf-sup constant.
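For reference, the discrete inf-sup constant is simply the smallest singular value of A_t(μ); a dense sketch is given below (for large sparse matrices one would resort to iterative solvers, e.g., from scipy.sparse.linalg, which is precisely the cost the new estimator avoids):

```python
import numpy as np

def inf_sup_constant(At):
    """Smallest singular value of the time-discrete system matrix A_t(mu);
    its reciprocal equals ||[A_t(mu)]^{-1}||_2, the factor entering the
    estimators.  Dense sketch; infeasible for truly large N."""
    return np.linalg.svd(At, compute_uv=False).min()
```

For an ill-conditioned A_t(μ) this value approaches zero and its reciprocal, and hence the estimated error, blows up, illustrating the second challenge mentioned above.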
Remark 13 For the sake of exposition, we derive the new inf-sup-constant-free error estimator based on the derivation of the output error estimator in Theorem 5. A similar process can be repeated to derive inf-sup-constant-free versions of the error estimators presented in (34), (35) and (36). Furthermore, a straightforward extension of the inf-sup-constant-free output error estimator applies to the output error estimators in (47) and (50), which deal with the case of black-box time-integration solvers.
Theorem 11 (Primal-dual inf-sup-constant-free output error estimator) For the time-discrete FOM (8) and the time-discrete ROM (9), assume the time step δt is constant, so that A_t(μ) does not change with time. Let all the assumptions in Theorem 5 be met; then the output error e_o^k(μ) = y^k(μ) − ŷ^k(μ) at the time instance t^k can be bounded as below. Here, x̂_du(μ) and r_du(μ) are defined in (26) and (27), respectively.
Proof We start with the expression (32) from Theorem 5. Since A_t(μ) does not depend on time, we can safely remove the superscript k from r_du(μ).
Unlike what is done in (32), we do not apply the matrix sub-multiplicative property in the second line to the term ‖[A_t(μ)]^{-T} r_du^k(μ)‖. The expression [A_t(μ)]^{-T} r_du(μ) =: e_du(μ) is seen to be the solution of the linear system [A_t(μ)]^T e_du(μ) = r_du(μ). (51) We call this linear system the dual-residual system corresponding to the dual system (25). Using the dual-residual system and the expression ỹ^k(μ) = ŷ^k(μ) + (V_du x̂_du(μ))^T r̂^k(μ), we obtain the bound, where x̂_du(μ) does not change with time, so that its superscript k is also removed. Defining x̆(μ) := e_du(μ) + V_du x̂_du(μ) and using ρ^k(μ) = ‖r̂^k(μ)‖ / ‖r^k(μ)‖, we get the desired error bound.
Finally, we approximate the ratio ρ^k(μ) with a time-independent quantity ρ(μ) to obtain the inf-sup-constant-free output error estimator Δ_iscf(μ) in (52).
Algorithm 2 Simultaneous construction of the projection bases for the inf-sup-constant-free output error estimator applicable to time-evolution systems
Input: Dual system matrices A_t(μ)^T, c(μ)^T, a training set Ξ composed of parameter samples taken from the parameter domain P, error tolerance tol < 1.
Output: Projection matrices V du and V e .

Computational aspects
In (52), evaluating x̆(μ) involves determining e_du(μ) by solving the dual-residual system (51) for every parameter sample μ. This step can be computationally expensive.
To address this, we propose to obtain a ROM for (51) such that we can approximate e_du(μ) ≈ V_e ê_du(μ). The ROM reads [Â_e(μ)]^T ê_du(μ) = r̂_du(μ), (53) where Â_e(μ) = V_e^T A_t(μ) V_e and r̂_du(μ) = V_e^T r_du(μ). The dual residual r_du(μ) is the residual induced by the approximate solution V_du x̂_du(μ) computed from the dual ROM (26). We propose a greedy algorithm in which V_e and the projection matrix V_du for the dual-system ROM (26) are constructed simultaneously. For an appropriately computed V_e, we have e_du(μ) ≈ V_e ê_du(μ), and hence the inf-sup-constant-free error estimator (52) can be further approximated as Δ̃_iscf(μ) in (54), with x̆_e(μ) := V_e ê_du(μ) + V_du x̂_du(μ). Next, the greedy algorithm to simultaneously construct V_du and V_e is detailed.
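In code, solving (53) amounts to a standard Galerkin projection; the following sketch assumes dense matrices and a given orthonormal basis V_e:

```python
import numpy as np

def solve_dual_residual_rom(At, Ve, r_du):
    """Galerkin ROM (53) of the dual-residual system A_t(mu)^T e_du = r_du:
    project, solve the small reduced system, and lift V_e ê_du."""
    Ae_hat_T = Ve.T @ At.T @ Ve          # [Â_e(mu)]^T = V_e^T A_t(mu)^T V_e
    r_hat = Ve.T @ r_du                  # r̂_du = V_e^T r_du
    e_hat = np.linalg.solve(Ae_hat_T, r_hat)
    return Ve @ e_hat                    # approximation of e_du(mu)
```

Whenever the exact solution e_du(μ) happens to lie in the range of V_e (and the projected matrix is nonsingular), the Galerkin approximation is exact, which is what the greedy construction of V_e below aims for.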

Simultaneous and greedy construction of V_du and V_e
The greedy algorithm is sketched in Algorithm 2. The inputs to the algorithm are the system matrices corresponding to the dual system, viz., A_t(μ)^T and c(μ)^T, a properly chosen training set, and a tolerance tol. The outputs are the two projection bases V_du and V_e, which are needed to evaluate x̆_e(μ) in the inf-sup-constant-free error estimator (54). In Step 1, the initial greedy parameters μ* and μ*_e are initialized, ensuring that μ* ≠ μ*_e. The projection matrices V_du, V_e^0 and V_e are initialized as empty matrices. In Steps 3 and 5, the FOM (25) is evaluated at μ* and μ*_e, respectively. The resulting dual-system snapshots are then used to update V_du and V_e^0 in Steps 4 and 6, respectively, using, e.g., the modified Gram-Schmidt process with deflation. Step 7 constructs the projection matrix V_e. Following this, the ROM (53) is solved to evaluate ‖V_e ê_du(μ)‖ for all μ in the training set, which is then used as an error estimator to choose the next greedy parameter μ* in Step 8. Furthermore, in Step 9, the norm of the residual r_du(μ) − [A_t(μ)]^T V_e ê_du(μ) induced by the ROM (53) of the dual-residual system is evaluated to determine the second greedy parameter μ*_e for the next iteration. In Step 10, the maximum estimated error at the current iteration is set to the maximum estimated error from Step 8, i.e., ε = max_μ ‖V_e ê_du(μ)‖.
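A compact and deliberately simplified sketch of Algorithm 2, assuming dense matrices, callables At_T(μ) and c_T(μ) returning the dual system data, and merging Steps 5-7 into a single basis update (all names illustrative):

```python
import numpy as np

def orth_add(V, x, drop_tol=1e-10):
    """Modified Gram-Schmidt step with deflation."""
    for j in range(V.shape[1]):
        x = x - (V[:, j] @ x) * V[:, j]
    nx = np.linalg.norm(x)
    return V if nx < drop_tol else np.column_stack([V, x / nx])

def algorithm2(At_T, c_T, train_set, tol, max_iter=30):
    """Greedily and simultaneously build V_du (dual system basis) and
    V_e (dual-residual basis); needs at least two training samples."""
    mu, mu_e = train_set[0], train_set[1]
    N = c_T(mu).size
    Vdu, Ve = np.empty((N, 0)), np.empty((N, 0))
    for _ in range(max_iter):
        Vdu = orth_add(Vdu, np.linalg.solve(At_T(mu), c_T(mu)))     # Steps 3-4
        Ve = orth_add(Ve, np.linalg.solve(At_T(mu_e), c_T(mu_e)))   # Steps 5-7
        est, res = [], []
        for m in train_set:
            A = At_T(m)
            xdu = np.linalg.solve(Vdu.T @ A @ Vdu, Vdu.T @ c_T(m))
            r_du = c_T(m) - A @ (Vdu @ xdu)              # dual residual
            edu = np.linalg.solve(Ve.T @ A @ Ve, Ve.T @ r_du)
            est.append(np.linalg.norm(Ve @ edu))         # Step 8 criterion
            res.append(np.linalg.norm(r_du - A @ (Ve @ edu)))  # Step 9
        mu = train_set[int(np.argmax(est))]
        mu_e = train_set[int(np.argmax(res))]
        if max(est) < tol:                               # Step 10
            break
    return Vdu, Ve
```

The two different greedy criteria (Steps 8 and 9) generically select different parameters, so both bases keep improving until the estimated dual error drops below the tolerance everywhere on the training set.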

Remark 14 In Step 8, we have used the criterion ‖V_e ê_du(μ)‖ to select the parameter μ* for constructing V_du for the ROM (26). Recalling the state error estimator (37) for steady parametric systems, it is easy to see that ‖V_e ê_du(μ)‖ is exactly the state error estimator for the state error x_du(μ) − x̂_du(μ) of the ROM (26). We use this state error estimator to iteratively construct the projection matrix V_du for the ROM (26). In order to evaluate the state error estimator, we also need to construct V_e. In [38], we have explained in detail how to construct V_e. In particular, a different criterion is used for the greedy construction of V_e, namely the norm ‖r_du(μ) − [A_t(μ)]^T V_e ê_du(μ)‖, in order to avoid μ* = μ*_e. The vector r_du(μ) − [A_t(μ)]^T V_e ê_du(μ) is nothing but the residual induced by the ROM (53) of the dual-residual system (51).
To obtain the ROM (3) corresponding to the FOM (2), we apply the adaptive POD-Greedy algorithm [31] to construct the projection matrix V. As the first test case (TC1), we apply the adaptive POD-Greedy algorithm with the output error estimator presented in Theorem 5. For the second test case (TC2), we apply the adaptive POD-Greedy algorithm with the new inf-sup-constant-free error estimator Δ̃_iscf(μ) in (54). To compute the projection bases for evaluating (54), we make use of Algorithm 2: we first run Algorithm 2 to obtain the projection matrices V_du and V_e, as well as the reduced quantities x̂_du(μ) and ê_du(μ) corresponding to V_du and V_e. During each iteration of the POD-Greedy algorithm, those quantities are then used to compute the output error estimator Δ̃_iscf(μ) in (54).

Numerical examples
Next, we illustrate the benefits of using the inf-sup-constant-free error estimator Δ̃_iscf(μ) in (54) with three numerical examples: the 1-D and 2-D Burgers' equations and the FitzHugh-Nagumo equations. It is demonstrated that, firstly, the inf-sup-constant-free error estimator remains accurate when used in the POD-Greedy algorithm to construct V. Secondly, the new approach yields a significant reduction of the offline computational costs by avoiding the solution of several large-scale eigenvalue problems for obtaining the inf-sup constant.
For the adaptive POD-Greedy algorithm, we plot the maximum estimated error over the training set at every iteration, computed using the respective error estimator for TC1 and TC2, i.e., ε = max_{μ∈Ξ} Δ(μ), where Δ(μ) is either (33) in the case of TC1 or (54) in the case of TC2.

1-D Burgers' equation
The viscous Burgers' equation, defined in the 1-D domain Ω := [0, 1], is given by (55), where z ∈ Ω is the spatial variable and the time variable t ∈ [0, 2]. We spatially discretize (55) with the finite difference method. The mesh size is Δz = 0.001, which results in a discretized FOM of dimension N = 4000. As the variable parameter, we consider the viscosity μ ∈ P := [0.005, 1]. The output of interest is the value of the state at the node just before the right boundary. The ROM tolerance is set to tol = 1 × 10^{-4}. We generate 100 sample points in P using np.logspace in Python, out of which 80 randomly chosen samples constitute the training set.
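The parameter sampling described above can be reproduced as in the following sketch (the random seed is an illustrative assumption):

```python
import numpy as np

# 100 logarithmically spaced viscosity samples in P = [0.005, 1]; 80 of
# them, drawn at random without replacement, form the training set.
rng = np.random.default_rng(0)
samples = np.logspace(np.log10(0.005), np.log10(1.0), 100)
train_set = rng.choice(samples, size=80, replace=False)
```

Logarithmic spacing concentrates samples at small viscosities, where the convective behavior of the Burgers' solution varies most strongly with μ.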
For TC1, we first use the standard greedy Algorithm 1 to compute the projection basis V_du for the ROM of the dual system. The error estimator used in Algorithm 1 is the state error bound (17), so that the inf-sup constants 1/σ_min(A_t(μ)) for all μ in the training set are pre-computed before starting the greedy iteration. These are then used to evaluate the output error estimator (33) during the POD-Greedy algorithm for constructing V. The greedy Algorithm 1 for constructing V_du converges in 1.1 s. However, computing the inf-sup constants took 164.8 s. For solving the eigenvalue problem at every parameter, we make use of the scipy library for Python. The POD-Greedy algorithm needs 255.7 s to converge, with the final ROM dimension n = 4.
In the case of TC2, Algorithm 2 is first used to compute the projection bases V_du and V_e simultaneously. This requires a total time of 0.98 s. The POD-Greedy algorithm using the inf-sup-constant-free error estimator takes the same 255.7 s to converge, and the final ROM dimension is also n = 4. Convergence of the POD-Greedy algorithm for ROM generation in the cases of TC1 and TC2 is plotted in Fig. 2. It is clear that using the inf-sup-constant-free error estimator results in little loss of accuracy in the error estimation, while speeding up the offline basis generation by 1.6×.
Fig. 2 1-D Burgers' equation: error (estimator) decay for TC1 and TC2

2-D Burgers' equation
We next consider the 2-D coupled Burgers' equation in the square domain Ω := [0, 2] × [0, 2], with governing equations given in (56). Dirichlet boundary conditions are imposed, and the initial conditions at time t = 0 are prescribed via φ_1 = φ_2 = 10 e^{−(z_1−0.8)^2 − (z_2−1.0)^2}. In (56), v_1(z_1, z_2, t) and v_2(z_1, z_2, t) denote the state variables and represent, respectively, the velocity components in the canonical x and y directions. Further, (z_1, z_2) ∈ Ω and t ∈ [0, 1]. Similar to the 1-D case, we spatially discretize the 2-D Burgers' equation using the finite difference method with step sizes Δz_1 = Δz_2 = 0.011 (90 divisions along both the x-axis and the y-axis). This results in a coupled FOM of dimension N = 2 · 8100. The viscosity μ ∈ P := [0.01, 1] is the parameter of interest. As the output, we take the mean of the x-component velocities in the region [0.7, 1.4] × [0.7, 1.4]. A first-order implicit-explicit scheme with time step size Δt = 0.0025 is used. The ROM tolerance is tol = 1 × 10^{-3}. We generate 60 logarithmically spaced points (with np.logspace in Python) from P, out of which 48 samples are randomly chosen to constitute the training set.
For TC1, the computation of the dual-system projection matrix V_du takes 36.5 s, and computing the inf-sup constant by solving an eigenvalue problem for every parameter in the training set took 3,380 s. Following this, Algorithm 1 is used to obtain the projection matrix V. It requires 5,808 s to reach the desired tolerance of 1 × 10^{-3} in 11 iterations. The ROM dimension is n = 44.
The simultaneous generation of V_du and V_e with Algorithm 2 needs 63 s in the case of TC2. The POD-Greedy algorithm using the inf-sup-constant-free error estimator takes 5,811 s, which is close to the time taken by the greedy algorithm in the case of TC1.
The resulting ROM has the same dimension as before, viz., n = 44. The convergence of the estimated and true errors for TC1 and TC2 is plotted in Fig. 3. Evidently, using the inf-sup-constant-free output error estimator results in no loss of accuracy of the estimated error. The overall speedup achieved in the case of TC2 is 1.6-fold compared to the offline time for TC1. Since the system is of much larger dimension than in the 1-D case, computing the inf-sup constants takes much longer: 3,380 s. Using the inf-sup-constant-free error estimator has therefore saved almost one hour of offline computational time.

FitzHugh-Nagumo equations
The FitzHugh-Nagumo system models the response of an excitable neuron or cell to an external stimulus. It finds applications in a variety of fields such as cardiac electrophysiology and brain modeling. The nonlinear coupled system of two partial differential equations is defined in the domain Ω := [0, L]. As for the previous examples, we first consider TC1 for the FitzHugh-Nagumo system. In this example, the greedy algorithm needs 1.6 s to obtain V_du, while computing the inf-sup constants takes 174.7 s. The POD-Greedy algorithm based on the error estimator in Theorem 5 converges to the desired tolerance in 8 iterations, taking 291.2 s. The resulting ROM is of dimension n = 48. Applying TC2 to this example, Algorithm 2 requires just 3.6 s to obtain V_du and V_e. The POD-Greedy algorithm converges in 8 iterations with a runtime of 292 s. The ROM dimension is again n = 48. The convergence of the POD-Greedy algorithm for ROM generation in the cases of TC1 and TC2 is plotted in Fig. 4. Likewise, using the inf-sup-constant-free error estimator results in no loss of accuracy, while yielding a 1.6× speedup. For all three examples, the inf-sup constants, i.e., the smallest singular values of A_t(μ) at all the training samples of μ, are close to 1, so that the effectivity of the inf-sup-constant-free error estimator in TC2 is almost indistinguishable from that in TC1. This can be seen from Figs. 2, 3 and 4.

Conclusion
A posteriori error estimation is vital not only to quantify the accuracy of ROMs but also to construct ROMs of small dimension in a computationally efficient and adaptive manner. In this review, we have presented a wide range of a posteriori error estimators applicable to (non)linear parametric systems, covering both steady and time-dependent systems. Furthermore, we have discussed multi-fidelity error estimators as a means to improve the computational efficiency of error estimation. As a novel contribution, we have introduced an inf-sup-constant-free output error estimator that is applicable to nonlinear time-dependent systems. This new error estimator is attractive for its improved efficiency and for its applicability to systems with a potentially ill-conditioned left-hand side matrix, e.g., A_t(μ) with smallest singular value close to zero. Results on three numerical examples illustrate the reduced computational costs offered by the inf-sup-constant-free output error estimator, achieved with smaller effort and little loss of accuracy. Going ahead, we envisage important potential for accurate error estimation in applications such as digital twins, where model updates can be done on-the-fly based on the accuracy quantified by error estimators.