

Reduced-order modelling of parametric systems via interpolation of heterogeneous surrogates


This paper studies parametric reduced-order modeling via the interpolation of linear multiple-input multiple-output reduced-order models, or, more generally, surrogate models in the frequency domain. It shows that the realization plays a central role, and two methods based on different realizations are proposed. Interpolation of reduced-order models in the Loewner representation is equivalent to interpolating the corresponding frequency response functions. Interpolation of reduced-order models in the (real) pole-residue representation is equivalent to interpolating the positions and residues of the poles. The latter pole-matching approach proves to be the more natural way to approximate the system dynamics. Numerical results demonstrate the high efficiency and wide applicability of the pole-matching method. It is shown to be efficient in interpolating surrogate models built by several different methods, including balanced truncation, Krylov methods, the Loewner framework, and a system identification method. It is even capable of interpolating a reduced-order model built by a projection-based method and one built by a data-driven method. Its other merits include low computational cost, the small size of the parametric reduced-order model, relative insensitivity to the dimension of the parameter space, and the capability of dealing with complicated parameter dependence.


Model order reduction (MOR), a flourishing computational technique for accelerating simulation-based system analysis, has over the last decades been applied successfully to high-dimensional models arising from various fields such as circuit simulation [1,2,3], (vibro-)acoustics [4], the design of microelectromechanical systems [5], and chromatography in chemical engineering [6]. The objective of MOR is to compute a reduced-order model (ROM) of small size k that captures some important characteristics of the original high-dimensional model of size n (normally \(k \ll n\)), such as dominant moments and leading Hankel singular values. When parametric studies are performed, it is desirable to build a parametric ROM (pROM), which not only incorporates the modeling parameters as free parameters, but also approximates the input–output behavior of the full-order model (FOM) well for any parameter value within the domain of interest. Therefore, many parametric MOR (PMOR) methods have been proposed; see, e.g., the recent survey [7].

In general, PMOR methods can be categorized into two types according to the PMOR survey [7]:

  1. Building a single pROM by using a global basis over the parameter space. These methods [8] have received intensive research attention and have proved to be efficient in many applications in the past few years, most notably the reduced basis method [9]. Despite their success in many application fields, these methods normally have difficulty dealing with many parameters, since both the computational cost and the size of the basis matrices may grow exponentially with the dimension of the parameter space because of the curse of dimensionality.

  2. Building an interpolatory pROM by interpolating local matrices or bases at parameter samples, each of which is obtained by applying nonparametric MOR at the corresponding parameter value. These methods are less studied and their performance is often not as satisfactory. According to [7], this approach can be further categorized into three types:

    a. Interpolation among local basis matrices [10]. Since the basis matrices \(V_i\in \mathbb {R}^{n \times k}\) are elements of the Stiefel manifold, this method interpolates \(V_i\) along the geodesic on the Stiefel manifold with the help of the tangent space. However, assuming that the basis matrix evolves along the geodesic is only heuristic: it is only one of infinitely many possible evolution paths on the manifold. In our numerical tests, the resulting ROM often diverges. Another disadvantage is that this method requires the storage of all basis matrices, which may be not only expensive but also infeasible in many cases. First, when the ROMs are built by a non-projection-based MOR method, e.g., a data-driven MOR method or a method based on physical reasoning, no bases are computed at all. In addition, in many practical applications, the parametric FOM is not available and all we can obtain are nonparametric FOMs built at some parameter samples, which causes difficulty for these methods. For example, these FOMs may be obtained from discretizations of partial differential equations with different meshes, may be of different dimensions, and their realizations may not be consistent.

    b. Interpolation among local reduced-order state-space matrices [11, 12]. As shown in [11], directly interpolating the local reduced-order state-space matrices does not work in general. A major difficulty is that a dynamical system has infinitely many realizations, and interpolating between different realizations can produce completely wrong results. For example, a system with the same input–output behavior can be obtained by rearranging the rows of the matrices, but interpolating two such system matrices normally makes no physical sense, e.g., interpolating between \({\left[ \begin{array}{cc} K(p_1) & C \\ 0 & I \end{array} \right] }\) and \({\left[ \begin{array}{cc} 0 & I \\ K(p_2) & C \end{array} \right] }\), where K(p) and C are square matrices of the same size, and I and 0 are the corresponding identity and zero matrices, respectively. Therefore, a natural idea is to apply a congruence transformation to obtain consistent bases, i.e., to solve an optimization problem for the transformation matrices and then interpolate the resulting consistent ROMs on some manifold [12]. Another choice is to conduct a singular value decomposition (SVD) on the union of all basis matrices to calculate the dominant “global subspace”, onto which we re-project all ROMs before conducting the interpolation [11]. However, like the methods discussed in 2(a), the ROMs must be of the same dimension and all bases have to be stored.

    c. Interpolation among the local frequency response functions (FRFs) [13, 14]. We will show later in the paper that interpolating Loewner ROMs [15] built from the local FRFs is equivalent to interpolating the local FRFs themselves. Therefore, interpolation among the local FRFs can be seen as a special case of interpolation among the local reduced-order state-space matrices. In addition, we propose a technique to compress the Loewner ROMs in order to save storage. Although this method is intuitive and easy to implement, it suffers from the problem of fixed poles: the positions of the poles do not change with the parameter, but are instead determined by the parameter values used for the interpolation.

The goal of the present paper is twofold:

  • This paper will propose a pole-matching reduced-order modeling method that interpolates linear multiple-input multiple-output (MIMO) ROMs in the frequency domain. Inspired by modal analysis in mechanical engineering [16], the pole-matching method relies completely on analyzing the positions and residues of the poles, rather than trying to recover the state vector of the FOM. We propose to first convert all ROMs to a unified realization, namely the pole-residue realization that stores the positions and residues of poles explicitly. Then, we match the poles according to their positions and residues in order to capture the evolution of the poles in the parametric system. Finally, we interpolate the positions and residues of all matched poles to obtain the parametric ROMs. This method does not require the storage of basis matrices, is capable of interpolating ROMs of different sizes, and works even when a parametric FOM does not exist, e.g., when ROMs are built by a data-driven MOR method or when FOMs at different parameter values result from different discretization methods or different grids. It can also interpolate ROMs of different nature, e.g., interpolating a ROM built by a mathematical MOR method and another ROM obtained from physical reasoning. It is relatively insensitive to the number of parameters and does not assume specific properties of the FOM, e.g., the affinity property required by the reduced basis method [9]. Numerical results show that the pole-matching method is more accurate than the previously proposed methods for our test cases.

  • The other goal of this paper is to show the importance of the realization of a dynamical system w.r.t. interpolation. For comparison purposes, we will develop another MIMO PMOR method in the frequency domain, namely the interpolation method for Loewner ROMs. We will show that interpolating the ROMs built by the Loewner framework in the original form is equivalent to interpolating the FRFs. Furthermore, we will also discuss the interpolation of Loewner ROMs in the compressed form, which is more efficient w.r.t. computation and storage. Unlike the pole-matching method, which captures the change of the positions and residues of the poles, the interpolation of Loewner ROMs builds parametric ROMs with fixed pole positions. Therefore, under different realizations, the parametric ROMs follow different evolution paths. Although both methods have clear physical meanings, the interpolation method for Loewner ROMs follows a path that the real-world system is unlikely to take, while the pole-matching method follows a more natural path and provides accurate parametric ROMs for all our test cases.

Let us emphasize here that the proposed method is more a reduced-order parametric modeling approach than a new PMOR method. Our pole-matching approach assumes the availability of locally valid surrogate models of the FOM. These are assumed to be obtained at feasible sampling points in parameter space, but they can result from various surrogate modeling methods, like

  • projection-based (or any other computational) MOR methods that compute a nonparametric ROM at a fixed parameter value,

  • data-driven approaches like the Loewner framework or dynamic mode decomposition [17],

  • system identification methods,

  • etc.

We do not even assume that the local surrogates that we interpolate are obtained from the same approach; we can employ a mixture of surrogate models obtained by any of the methods listed above. A particularly suitable area of application for our method is the situation where only an oracle is available, e.g., a running code producing either a state-space model for a given fixed parameter, or an input–output sequence of time-series or frequency-response data. Our approach is fully non-intrusive, as it does not require any further knowledge of the system.


Throughout the paper, \(\imath \) is the imaginary unit, \(M^ T \) represents the transpose of the matrix M, and \(M_\beta ^{\alpha , T }\) denotes \((M_\beta ^{\alpha })^ T \).


In this section, we will briefly review two different types of (P)MOR methods: projection-based methods and the Loewner framework, which is a data-driven (P)MOR method. Though we do not explicitly use any of these methods in our approach, we will frequently refer to them and will also use them for comparison purposes in the numerical examples section. Therefore, we include this brief review for better readability.

Projection-based (P)MOR

This paper focuses on PMOR of state-space systems in the frequency domain:

$$\begin{aligned} \big ( s \mathcal {E}(p) - \mathcal {A}(p) \big ) X(s,p)&= \mathcal {B}(p)u(s), \nonumber \\ Y(s,p)&= \mathcal {C}(p) X(s,p), \end{aligned}$$

where \(\mathcal {E}(p), \mathcal {A}(p) \in \mathbb {R}^{n \times n}\), \(\mathcal {B}(p) \in \mathbb {R}^{n \times m_I}\), \(\mathcal {C}(p)\in \mathbb {R}^{m_O \times n}\), \(u(s)\in \mathbb {R}^{m_I}\) and \(p\in \mathcal {D}\). A projection-based PMOR method [7, 18] first builds two bases \(Q,U \in \mathbb {R}^{n\times k}\) (normally, \(k \ll n\)), then approximates \(X(s,p)\approx U x(s,p)\) (\(x(s,p)\in \mathbb {R}^k\)) in the range of U, and finally forces the residual to be orthogonal to the range of Q to obtain the pROM:

$$\begin{aligned} \big ( s E(p) - A(p) \big ) x(s,p)&= B(p)u(s), \nonumber \\ y(s,p)&= C(p) x(s,p), \end{aligned}$$

where \([E(p), A(p)]=Q^T[\mathcal {E}(p),\mathcal {A}(p)]U\), \(B(p)=Q^T \mathcal {B}(p)\) and \(C(p)=\mathcal {C}(p)U\).
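The projection step above can be sketched in a few lines of NumPy. This is a minimal illustration, not the paper's implementation: the toy matrices, the shifts used to build the basis, and the Galerkin choice \(Q = U\) are all assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m_i, m_o = 50, 2, 2

# Toy full-order matrices (hypothetical stand-ins for E(p), A(p), B(p), C(p)
# frozen at one parameter value).
E = np.eye(n)
A = -np.diag(rng.uniform(1.0, 10.0, n)) + 0.1 * rng.standard_normal((n, n))
B = rng.standard_normal((n, m_i))
C = rng.standard_normal((m_o, n))

# One simple way to build a trial basis U: solve (sE - A)X = B at a few
# shifts and orthonormalize the real and imaginary parts of the solutions.
shifts = [1j, 2j, 5j]
blocks = [np.linalg.solve(s * E - A, B) for s in shifts]
U, _ = np.linalg.qr(np.hstack([f(b) for b in blocks for f in (np.real, np.imag)]))
Q = U  # Galerkin projection; a Petrov-Galerkin method would build Q separately

# Reduced matrices: E_r = Q^T E U, A_r = Q^T A U, B_r = Q^T B, C_r = C U.
E_r, A_r = Q.T @ E @ U, Q.T @ A @ U
B_r, C_r = Q.T @ B, C @ U

def frf(Em, Am, Bm, Cm, s):
    """Frequency response H(s) = C (sE - A)^{-1} B."""
    return Cm @ np.linalg.solve(s * Em - Am, Bm)

# Since (2j*E - A)^{-1}B lies in the (complex) span of U, the reduced FRF
# matches the full FRF at the shifts used to build U.
err = np.linalg.norm(frf(E, A, B, C, 2j) - frf(E_r, A_r, B_r, C_r, 2j))
print(err)  # close to machine precision
```

The basis here has \(k = 12\) columns; any other sampling strategy (e.g., including parameter derivatives) would change only the construction of `U` and `Q`, not the projection formulas.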

Now we discuss some subcategories under the general framework presented above:

  • (P)MOR based on projection using local bases. These methods build the bases Q and U using only the data computed at a given parameter value, say \(p_0\). The resulting ROM is valid for \(p_0\), but if derivative information with respect to p is also included in Q and U, we can obtain a local pROM [8, 19, 20]. This type of (p)ROM can be used as a building block in our proposed pole-matching PMOR method. When derivative information is included in the bases, we can perform Hermite interpolation in our proposed method, which has the potential of reducing the number of parameter samples needed to cover a specific region of the parameter space.

  • PMOR based on projection using global bases. Many PMOR methods build pROMs by projection onto a global subspace that contains the data obtained at several parameter values, namely \(p_1\), \(p_2\), ..., \(p_j\) [7, 8, 11]. Denoting the basis built by a nonparametric MOR method at \(p_i\) by \(U_i\), the global basis U can be obtained by computing the SVD of \([U_1, U_2, \ldots , U_j ]\).

  • PMOR based on interpolating local bases. These methods interpolate the bases precomputed at different parameter values, say \(U_i\) for \(p_i\), to compute the basis for the requested parameter value \(p_*\) [10, 12]. A straightforward interpolation normally does not work for two reasons:

    1. The bases \(U_1\), \(U_2\), ..., \(U_j\) may be inconsistent. Suppose that the parametric dynamics is well captured by the family of subspaces \(\mathcal {U}(p) := \mathop {\mathrm {colspan}}\{U(p)\}\). Further, let \(\mathcal {U}_1\), \(\mathcal {U}_2\), ..., \(\mathcal {U}_j\) represent \(\mathcal {U}(p)\) well at \(p_1\), \(p_2\), ..., \(p_j\). Then, intuitively, the interpolation of these subspaces is meaningful. Nevertheless, the basis matrices \(U_i\) for these subspaces computed by a MOR method using sampling at \(p_i\) will generally yield reduced-order models whose states live in different coordinate systems, since for any orthogonal \(K\in \mathbb {R}^{k \times k}\), we have \(\mathop {\mathrm {colspan}}\{U_i K\}=\mathop {\mathrm {colspan}}\{U_i\}=\mathcal {U}_i\). Hence, if we just interpolate the computed bases \(U_{i-1}\) and \(U_i\), we consequently interpolate states in different coordinate systems, which leads to inconsistencies and poor numerical approximation in state space. Therefore, in the general case, one should compute consistent bases before conducting the interpolation. In [12], it was proposed to compute a basis \(U_i K\) for \(p_{i}\) that is “as consistent as possible” with the basis \(U_{i-1}\) for \(p_{i-1}\), by aligning the subspaces via solving the optimization (Procrustes) problem

      $$\begin{aligned} \min _{K\in \mathbb {R}^{k \times k}: K^TK=I} \Vert U_{i}K-U_{i-1}\Vert . \end{aligned}$$
    2. Directly interpolating the orthonormal matrices \(U_1\), ..., \(U_j\) normally does not result in an orthonormal matrix. For example, if \(U_2 = -U_1\), direct linear interpolation at \(\frac{p_1+p_2}{2}\) gives 0, which is clearly not a basis. Therefore, the success of these methods requires interpolation on the correct manifold, e.g., the Grassmann manifold or the Stiefel manifold.

  • PMOR based only on (nonparametric) ROMs. These methods build the pROM entirely from ROMs, which may not have been generated by projection-based (P)MOR methods. Even if they were, it is assumed that the bases U and Q are unknown. This is, for instance, the situation for ROMs built by data-driven methods such as the Loewner framework.
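The Procrustes alignment in item 1 above has a closed-form solution via the SVD. The following is a minimal sketch under synthetic data; the helper name `align_basis` is hypothetical.

```python
import numpy as np

def align_basis(U_i, U_prev):
    """Orthogonal Procrustes: find K with K^T K = I minimizing
    ||U_i K - U_prev||_F; the minimizer is K = Y Z^T, where
    U_i^T U_prev = Y S Z^T is an SVD."""
    Y, _, Zt = np.linalg.svd(U_i.T @ U_prev)
    return U_i @ (Y @ Zt)

rng = np.random.default_rng(1)
n, k = 40, 5
U1, _ = np.linalg.qr(rng.standard_normal((n, k)))

# Same subspace, different coordinates: colspan{U1 K} = colspan{U1}.
K_arb, _ = np.linalg.qr(rng.standard_normal((k, k)))
U2 = U1 @ K_arb

# Direct averaging of the inconsistent bases is not even orthonormal ...
bad = 0.5 * (U1 + U2)
print(np.linalg.norm(bad.T @ bad - np.eye(k)))

# ... but after alignment the two bases agree, so interpolation is consistent.
U2_aligned = align_basis(U2, U1)
print(np.linalg.norm(U2_aligned - U1))  # ~ 0
```

In this synthetic case the two bases span exactly the same subspace, so alignment recovers `U1` up to round-off; in practice the aligned bases only agree approximately, and the interpolation is still carried out on the appropriate matrix manifold.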

MOR using the Loewner framework

The objective of MOR using the Loewner framework [15] is to construct a ROM in the frequency domain in the form of

$$\begin{aligned} \left\{ \begin{array}{rcl} (sE-A)x&=&Bu,\\ y&=&Cx, \end{array} \right. \end{aligned}$$

using measured samples of the FRF:

$$\begin{aligned} H(s)=C(sE-A)^{-1}B. \end{aligned}$$

The Loewner framework assumes that the FRF is only known as a black box: the matrices E, A, B and C are assumed to be unknown, but the FRF can be measured at frequency samples. For MIMO systems, each frequency sample is paired with either a left or a right tangential direction for the measurement:

  1. A right triplet sample \((\lambda _i , r_i , w_i)\): given a frequency sample \(\lambda _i\) and the right tangential direction \(r_i \in \mathbb {C}^{m_I \times 1}\), we measure \(w_i = H(\lambda _i)r_i\).

  2. A left triplet sample \((\mu _j , \ell _j , v_j )\): given a frequency sample \(\mu _j\) and the left tangential direction \(\ell _j \in \mathbb {C}^{1 \times m_O}\), we measure \(v_j =\ell _j H(\mu _j)\).

Given \(n_R\) right samples and \(n_L\) left samples, the Loewner framework computes the Loewner matrix \(\mathbb {L} \in \mathbb {R}^{n_L\times n_R}\) with

$$\begin{aligned} [\mathbb {L}]_{ij}=\frac{v_i r_j-\ell _i w_j}{\mu _i-\lambda _j}, \end{aligned}$$

and the shifted Loewner matrix \(\mathbb {L}_\sigma \in \mathbb {R}^{n_L \times n_R}\) with

$$\begin{aligned} [\mathbb {L}_\sigma ]_{ij}=\frac{\mu _i v_i r_j-\ell _i w_j\lambda _j}{\mu _i-\lambda _j}. \end{aligned}$$

Using the Loewner matrix and the shifted Loewner matrix, a ROM can be constructed as follows [15].

  • The Loewner ROM in the original form can be constructed as:

$$\begin{aligned} E=-\mathbb {L},&\qquad B=V=[v_1^T,v_2^T,\ldots ,v_{n_L}^T]^T, \nonumber \\ A=-\mathbb {L}_\sigma ,&\qquad C=W=[w_1,w_2,\ldots ,w_{n_R}]. \end{aligned}$$

    It is highly likely that the matrix pencil \((\mathbb {L}_\sigma ,\mathbb {L})\) is (numerically) singular, but even in that case, the original Loewner ROM defined in (8) serves as a singular representation of an approximated FRF.

  • The Loewner ROM in the compressed form is a concise and regular ROM computed from the Loewner ROM in the original form. First, we compute a rank-revealing truncated SVD:

    $$\begin{aligned} s\mathbb {L}-\mathbb {L}_\sigma =Y\Sigma X^* \approx Y_k \Sigma _k X_k^*, \end{aligned}$$

    where s is a frequency sample freely chosen from the set \(\{\lambda _i\} \cup \{\mu _j\}\), and k is the number of dominant singular values chosen for the truncated SVD. Then, a Loewner ROM in the compressed form is constructed as [15]:

    $$\begin{aligned} E=-Y_k^* \mathbb {L} X_k,\quad A=-Y_k^*\mathbb {L}_\sigma X_k,\quad B=Y^*_k V, \quad C=W X_k. \end{aligned}$$
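For the SISO case, where all tangential directions are scalars equal to one, the whole pipeline — building \(\mathbb{L}\) and \(\mathbb{L}_\sigma\) from frequency samples and compressing via the truncated SVD — can be sketched as follows. The toy transfer function, the sample points, and the truncation order are assumptions made for this example.

```python
import numpy as np

# True (hidden) SISO system, used only to generate FRF data:
# H(s) = 1/(s+1) + 1/(s+2) + 1/(s+3), McMillan degree 3.
def H(s):
    return sum(1.0 / (s + j) for j in (1.0, 2.0, 3.0))

lam = np.array([0.1, 1.0, 2.0, 5.0])   # right sample points lambda_i
mu = np.array([0.5, 1.5, 3.0, 8.0])    # left sample points mu_j
w = np.array([H(l) for l in lam])      # right data w_i = H(lambda_i)
v = np.array([H(m) for m in mu])       # left data  v_j = H(mu_j)

# Loewner and shifted Loewner matrices (SISO: r_j = ell_i = 1).
L = (v[:, None] - w[None, :]) / (mu[:, None] - lam[None, :])
Ls = (mu[:, None] * v[:, None] - w[None, :] * lam[None, :]) \
     / (mu[:, None] - lam[None, :])

# Compressed Loewner ROM: truncated SVD of s0*L - Ls at a sample point s0.
s0, k = lam[0], 3
Y, sig, Xh = np.linalg.svd(s0 * L - Ls)
Yk, Xk = Y[:, :k], Xh[:k, :].conj().T
E = -Yk.conj().T @ L @ Xk
A = -Yk.conj().T @ Ls @ Xk
B = Yk.conj().T @ v
C = w @ Xk

# With enough data and k equal to the McMillan degree, the compressed ROM
# recovers H exactly, also away from the sample points.
s = 1.7
H_red = C @ np.linalg.solve(s * E - A, B)
print(abs(H_red - H(s)))
```

Here the data come from an order-3 rational function, so the fourth singular value of \(s_0\mathbb{L}-\mathbb{L}_\sigma\) is numerically zero and the rank-3 truncation is exact; with noisy or higher-order data, k is chosen from the singular value decay instead.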

The Loewner framework has been extended to accommodate parametric systems as a data-driven PMOR method [21]. However, in “Interpolatory PMOR in the Loewner realization” section, we will study another possibility, namely the interpolation of Loewner ROMs.

The pole-matching PMOR method based on the pole-residue realization

In this section, we will first introduce the pole-residue realization for ROMs and develop a pole-matching PMOR method for single-input single-output (SISO) systems, part of which was covered in our conference paper [22]. Then, we will generalize the pole-matching PMOR method to interpolate MIMO ROMs in “The pole-matching method for MIMO systems” section.

The pole-matching method relies exclusively on ROMs built at the parameter samples:

$$\begin{aligned} \big (s I^{(i)}-A^{(i)} \big ) x(s)&= B^{(i)}u(s),\nonumber \\ y(s)&=C^{(i)}x(s), \end{aligned}$$

where \(A^{(i)}\in \mathbb {R}^{k\times k}\), \(B^{(i)}\in \mathbb {R}^{k \times r_I}\), \(C^{(i)}\in \mathbb {R}^{r_O \times k}\), \(I^{(i)}\in \mathbb {R}^{k\times k}\) is the identity matrix, and the ROM \(\big (A^{(i)},B^{(i)},C^{(i)} \big )\) is built for the parameter value \(p_i\) (\(i=1,2,\ldots ,n_p\)). It is not required that the same MOR method is used to build ROMs for all \(p_i\) values.

However, when we do apply a projection-based MOR method to the FOM (1) to obtain ROMs in the form of (2) at each parameter sample \(p_i\), it is easy to obtain the ROMs that we need here in the form of (11) as long as \(E(p_i)\) is nonsingular by assigning \(A^{(i)} \leftarrow E(p_i)^{-1}A(p_i)\), \(B^{(i)} \leftarrow E(p_i)^{-1}B(p_i)\), and \(C^{(i)} \leftarrow C(p_i)\).
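This conversion can be sketched as follows; the helper name and the toy matrices are hypothetical, and E is assumed nonsingular as stated above.

```python
import numpy as np

def to_identity_E(E, A, B, C):
    """Rewrite (sE - A)x = Bu, y = Cx with nonsingular E as
    (sI - A')x = B'u, y = C'x via A' = E^{-1}A, B' = E^{-1}B, C' = C."""
    A_i = np.linalg.solve(E, A)   # E^{-1} A without forming the inverse
    B_i = np.linalg.solve(E, B)
    return A_i, B_i, C

# Hypothetical small reduced-order matrices, just for illustration.
rng = np.random.default_rng(2)
E = np.eye(4) + 0.1 * rng.standard_normal((4, 4))
A = rng.standard_normal((4, 4))
B = rng.standard_normal((4, 1))
C = rng.standard_normal((1, 4))
A_i, B_i, C_i = to_identity_E(E, A, B, C)

# The FRFs of the two realizations coincide.
s = 1.5j
h1 = C @ np.linalg.solve(s * E - A, B)
h2 = C_i @ np.linalg.solve(s * np.eye(4) - A_i, B_i)
print(np.abs(h1 - h2).max())  # ~ 0
```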

Remark 1

The assumption of a nonsingular \(E(p_i)\) is satisfied in many important cases. For example, it always holds when we apply a projection-based MOR method to reduce a system of parametric ordinary differential equations (ODEs) (1) at \(p_i\), because \(E(p_i)=Q^T\mathcal {E}(p_i)U\), where Q and U are both of rank k and \(\mathcal {E}(p_i)\) is nonsingular for a system of ODEs. In many situations, e.g., when the model stems from a parametric finite-element model, E will be constant and nonsingular (a (projected) mass matrix), and this is the typical class of models we consider here. For such models, a parameter-dependent E(p) may occur if the geometry of the domain on which the finite-element model is constructed is parameterized. Still, E(p) will then be a mass matrix and will in general be nonsingular, also after (Petrov-)Galerkin projection. Even for differential-algebraic equations with differentiability index one, classical model reduction techniques like balanced truncation [18] or IRKA [23] usually yield reduced-order models with nonsingular E by allowing a nonzero feedthrough term Du in the output equation; see [24] for details.

The pole-residue realization for SISO systems

Before presenting the MIMO case in “The pole-matching method for MIMO systems” section, we focus on the pole-matching method for SISO systems, i.e., systems in the form of (11) with \(r_I = r_O = 1\), and for simplicity of notation, we denote \(B_j = B_{j,1}\) and \(C_j = C_{1,j}\). In this section, we focus on a single ROM built at the parameter sample \(p_i\) and omit the index \(\cdot ^{(i)}\) in the system (11) for simpler notation:

$$\begin{aligned} \big (s I-A \big ) x&= B,\nonumber \\ y&=C x. \end{aligned}$$

For now, we assume that the matrix A is nonsingular and all its eigenvalues are simple: the more complicated cases will be discussed in “Practical considerations” section. For a real eigenvalue \(\lambda _j\) and its corresponding eigenvector \(v_j\), we have

$$\begin{aligned} A v_j = \lambda _j v_j, \end{aligned}$$

while for the conjugate complex eigenpairs \((a_j \pm \imath b_j, r_j \pm \imath q_j)\), the definition \(A(r_j \pm \imath q_j)=(a_j \pm \imath b_j)(r_j \pm \imath q_j)\) leads to:

$$\begin{aligned} A \left[ r_j \quad q_j\right] = \left[ r_j \quad q_j\right] \left[ \begin{array}{cc} a_j & b_j \\ -b_j & a_j \end{array} \right] . \end{aligned}$$


Collecting all eigenpairs, we define the block-diagonal matrix \(\Lambda \) and the matrix P as

$$\begin{aligned} \Lambda = \left[ \begin{array}{cccc} \Lambda _1 \\ & \Lambda _2 \\ && \ddots \\ &&& \Lambda _m \end{array} \right] , \qquad P=\left[ P_1, P_2, \ldots , P_m\right] , \end{aligned}$$

where for the single real eigenpair \((\lambda _j, v_j)\),

$$\begin{aligned} \Lambda _j = \left[ \lambda _j \right] , \qquad P_j=\left[ v_j \right] \qquad \text {and} \qquad m_j=1, \end{aligned}$$

while for the conjugate complex eigenpairs \((a_j \pm \imath b_j, r_j \pm \imath q_j)\),

$$\begin{aligned} \Lambda _j = \left[ \begin{array}{cc} a_j & b_j \\ -b_j & a_j \end{array} \right] , \qquad P_j=\left[ r_j \quad q_j \right] \qquad \text {and}\qquad m_j=2. \end{aligned}$$

To facilitate the later more generic discussion for semisimple and defective eigenvalues, we assume \(\Lambda _j \in \mathbb {R}^{m_j \times m_j}\) and \(P_j \in \mathbb {R}^{k \times m_j}\) in general with \(m_j\) a possibly larger integer.

Then, the complex eigenvalue decomposition is described by the following real matrix equation [25]:

$$\begin{aligned} A P =P\Lambda , \end{aligned}$$

from which the associated similarity transformation follows:

$$\begin{aligned} A=P \Lambda P^{-1}. \end{aligned}$$


The output of the ROM can then be written as

$$\begin{aligned} y=C ( s I - A )^{-1}B = C ( s I - P \Lambda P^{-1} )^{-1}B=CP(s I- \Lambda )^{-1}P^{-1}B. \end{aligned}$$


We define

$$\begin{aligned} C^\text {I}=CP=[C^\text {I}_1, C^\text {I}_2, \ldots , C^\text {I}_m], \qquad B^\text {I}=P^{-1}B=[ B^\text {I,T}_1, B^\text {I,T}_2, \ldots , B^\text {I,T}_m]^\text {T}, \end{aligned}$$

where \(C^\text {I}\in \mathbb {R}^{1 \times k}\), \(B^\text {I}\in \mathbb {R}^{k \times 1}\), \(C^\text {I}_j\in \mathbb {R}^{1 \times m_j}\), and \(B^\text {I}_j\in \mathbb {R}^{m_j \times 1}\). Then, we derive

$$\begin{aligned} y=\sum _{j=1}^m C^\text {I}_j (s I - \Lambda _j )^{-1} B^\text {I}_j. \end{aligned}$$

For the real eigenpair \((\lambda _j, v_j)\), \(C^\text {I}_j\) and \(B^\text {I}_j\) are scalars, with which we define

$$\begin{aligned} C^\text {II}_j=C^\text {I}_j B^\text {I}_j \quad \text {and} \quad B^\text {II}_j=1 \end{aligned}$$

and derive

$$\begin{aligned} C^\text {I}_j (s I - \Lambda _j )^{-1} B^\text {I}_j = \frac{C^\text {I}_jB^\text {I}_j}{s-\lambda _j}=C^\text {II}_j (s I - \Lambda _j )^{-1} B^\text {II}_j, \end{aligned}$$

while for the conjugate complex eigenpairs \((a_j \pm \imath b_j, r_j \pm \imath q_j)\), we first define \(C^\text {I}_j=[C^\text {I}_{j,1},C^\text {I}_{j,2}]\in \mathbb {R}^{1 \times 2}\) and \(B^\text {I}_j=[B^\text {I}_{j,1},B^\text {I}_{j,2}]^\text {T}\in \mathbb {R}^{2 \times 1}\), and then derive

$$\begin{aligned}&C^\text {I}_j (s I - \Lambda _j )^{-1} B^\text {I}_j\nonumber \\&\quad = \frac{C^\text {I}_j \left[ \begin{array}{cc} s-a_j & b_j \\ -b_j & s-a_j \end{array}\right] B^\text {I}_j}{(s-a_j)^2+b_j^2} \nonumber \\&\quad =\frac{\left( C^\text {I}_{j,1}B^\text {I}_{j,1} + C^\text {I}_{j,2}B^\text {I}_{j,2}, C^\text {I}_{j,2}B^\text {I}_{j,1} - C^\text {I}_{j,1}B^\text {I}_{j,2}\right) \left[ \begin{array}{cc} s-a_j & b_j \\ -b_j & s-a_j \end{array}\right] \left[ \begin{array}{c} 1 \\ 0 \end{array} \right] }{(s-a_j)^2+b_j^2} \nonumber \\&\quad =C^\text {II}_j (s I - \Lambda _j )^{-1} B^\text {II}_j, \end{aligned}$$

where we define

$$\begin{aligned} C^\text {II}_j=(C^\text {I}_{j,1}B^\text {I}_{j,1} + C^\text {I}_{j,2}B^\text {I}_{j,2}, C^\text {I}_{j,2}B^\text {I}_{j,1} - C^\text {I}_{j,1}B^\text {I}_{j,2}) \quad \text {and} \quad B^\text {II}_j=[1,\, 0]^\text {T}. \end{aligned}$$


Combining the two cases, we obtain

$$\begin{aligned} y&=\sum _{j=1}^m C^\text {I}_j (s I - \Lambda _j )^{-1} B^\text {I}_j \nonumber \\&=\sum _{j=1}^m C^\text {II}_j (s I - \Lambda _j )^{-1} B^\text {II}_j \nonumber \\&=C^\text {II} (s I - \Lambda )^{-1} B^\text {II}, \end{aligned}$$

where \(C^\text {II}=[C^\text {II}_1,C^\text {II}_2,\ldots ,C^\text {II}_m]\) and \(B^\text {II}=[B^\text {II,T}_1,B^\text {II,T}_2,\ldots ,B^\text {II,T}_m]^\text {T}\).

Definition 1

For the linear system (ABC) in (11), its pole-residue realization is defined as \((\Lambda ,B^\text {II},C^\text {II})\) in (25), namely

$$\begin{aligned} (sI-\Lambda )x&= B^\text {II},\nonumber \\ y&=C^\text {II} x. \end{aligned}$$
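A minimal sketch of this conversion for a SISO ROM with simple eigenvalues follows. The function name and the test matrices are hypothetical; the real \(2\times 2\) blocks and the entries of \(C^\text{II}\) are assembled here from the complex eigendecomposition (for a complex pair with residue \(\rho\), \(C^\text{II}_j = (2\,\mathrm{Re}\,\rho , \, 2\,\mathrm{Im}\,\rho )\), which is algebraically equivalent to the real construction above).

```python
import numpy as np

def pole_residue_realization(A, B, C):
    """Convert a SISO ROM (A, B, C) with simple eigenvalues into the
    pole-residue realization (Lambda, B_II, C_II): Lambda is block diagonal
    with 1x1 blocks [lambda_j] for real poles and 2x2 blocks
    [[a, b], [-b, a]] for complex pairs a +/- i b; the residues are stored
    explicitly in C_II, while B_II consists only of ones and zeros."""
    lam, P = np.linalg.eig(A)
    b_t = np.linalg.solve(P, B.ravel())   # B^I = P^{-1} B
    c_t = C.ravel() @ P                   # C^I = C P
    res = c_t * b_t                       # complex residues (intrinsic)
    blocks, C_II, B_II = [], [], []
    done = np.zeros(len(lam), dtype=bool)
    for j in range(len(lam)):
        if done[j]:
            continue
        if abs(lam[j].imag) < 1e-12:      # real pole
            blocks.append(np.array([[lam[j].real]]))
            C_II.append(res[j].real)
            B_II.append(1.0)
            done[j] = True
        else:                             # complex pair: find the conjugate
            jj = next(i for i in range(j + 1, len(lam))
                      if not done[i] and abs(lam[i] - lam[j].conj()) < 1e-8)
            a, b = lam[j].real, abs(lam[j].imag)
            rho = res[j] if lam[j].imag > 0 else res[jj]
            blocks.append(np.array([[a, b], [-b, a]]))
            C_II.extend([2 * rho.real, 2 * rho.imag])
            B_II.extend([1.0, 0.0])
            done[j] = done[jj] = True
    k = sum(bl.shape[0] for bl in blocks)
    Lam, i = np.zeros((k, k)), 0
    for bl in blocks:
        m = bl.shape[0]
        Lam[i:i + m, i:i + m] = bl
        i += m
    return Lam, np.array(B_II), np.array(C_II)

# Small ROM with one real pole (-1) and one complex pair (-2 +/- 3i).
A = np.array([[-1.0, 0.0, 0.0], [0.0, -2.0, 3.0], [0.0, -3.0, -2.0]])
B = np.array([1.0, 1.0, 0.5])
C = np.array([2.0, 1.0, -1.0])
Lam, B2, C2 = pole_residue_realization(A, B, C)

# The FRFs of the original and pole-residue realizations coincide.
s = 0.7j
h_orig = C @ np.linalg.solve(s * np.eye(3) - A, B)
h_pr = C2 @ np.linalg.solve(s * np.eye(3) - Lam, B2)
print(abs(h_orig - h_pr))  # ~ 0
```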

The pole-matching method

The pole-residue realization is a natural choice for the interpolation of ROMs because of the following theorem.

Theorem 1

Assume that \((\Lambda ^{(i)},B^{\text {II},(i)},C^{\text {II},(i)})\) (\(i=1,2,\ldots ,n_p\)) are ROMs in the pole-residue realization built for \(p_i\), respectively, all of which are of the same dimension k. Assume further that the block structures of \(\Lambda ^{(i)}\) are the same, i.e., for any \(1 \le j \le m\), the size of the block \(\Lambda _j^{(i)}\) is the same for all i. Then, interpolating the matrices \((\Lambda ^{(i)},B^{\text {II},(i)},C^{\text {II},(i)})\) (with respect to the parameter p) is equivalent to interpolating the positions and residues of each pole, respectively.


Proof

This follows directly from the construction: in the pole-residue realization, the positions and residues of the poles are stored directly in the matrices \(\Lambda \) and \(C^\text {II}\), respectively, both for real eigenvalues and for conjugate complex eigenvalues, as shown in Eqs. (14), (15), (21), (22), (23) and (24). \(\square \)

Remark 2

This theorem shows that when we interpolate the pole-residue realizations \((\Lambda ^{(i)},B^{\text {II},(i)},C^{\text {II},(i)})\), we interpolate the positions and residues of the poles, which are quantities intrinsic to the FRFs themselves. By contrast, interpolating systems in the realization \((\Lambda ^{(i)},B^{\text {I},(i)},C^{\text {I},(i)})\), where we only assume that \(\Lambda ^{(i)}\) takes the form of (13) and impose no requirement on \(B^{\text {I},(i)}\) and \(C^{\text {I},(i)}\), may be affected by the additional degrees of freedom introduced by the realization. For example, assume that \((\Lambda ^{(i)},B^{\text {I},(i)},C^{\text {I},(i)})\) (\(i=1,2\)) are two ROMs of dimension 3 describing the same system at the same parameter value with different realizations: \(\Lambda ^{(1)}=\Lambda ^{(2)}=\text {diag}\{\lambda _1,\lambda _2,\lambda _3\}\) (\(\lambda _1\), \(\lambda _2\) and \(\lambda _3\) are three distinct real numbers), \(B^{\text {I},(1)}=[16,2,1]^\text {T}\), \(B^{\text {I},(2)}=[4,4,4]^\text {T}\), \(C^{\text {I},(1)}=[1,8,16]\), \(C^{\text {I},(2)}=[4,4,4]\). These two ROMs have the same FRF due to (22), and the residues of all poles are 16. A critical property of a sound interpolation-based framework is that, when we interpolate two equivalent systems at the same parameter value, the interpolated system should be equivalent to these two systems no matter what interpolation method we use. However, if we interpolate these two realizations linearly with equal weights 0.5, we get \(\Lambda ^{(3)}=\text {diag}\{\lambda _1,\lambda _2,\lambda _3\}\), \(B^{\text {I},(3)}=[10,3,2.5]^\text {T}\), \(C^{\text {I},(3)}=[2.5,6,10]\), and the residues of the three poles become 25, 18 and 25, respectively, which are all wrong. Therefore, the motivation for defining the pole-residue realization is to remove the additional degrees of freedom, which may give rise to spurious results, by “modifying” \(C^\text {I}\) and \(B^\text {I}\) to \(C^\text {II}\) and \(B^\text {II}\).
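The numbers in this remark are easy to reproduce with a short sketch:

```python
import numpy as np

# Three distinct real poles; their actual values do not matter here.
lam = np.array([-1.0, -2.0, -3.0])
B1, C1 = np.array([16.0, 2.0, 1.0]), np.array([1.0, 8.0, 16.0])
B2, C2 = np.array([4.0, 4.0, 4.0]), np.array([4.0, 4.0, 4.0])

# Both realizations carry identical residues C_j * B_j, hence identical FRFs.
print(C1 * B1)  # [16. 16. 16.]
print(C2 * B2)  # [16. 16. 16.]
s = 0.5
H = lambda Bv, Cv: np.sum(Cv * Bv / (s - lam))
print(H(B1, C1) - H(B2, C2))  # 0.0

# Entry-wise linear interpolation with equal weights 0.5 ...
B3, C3 = 0.5 * (B1 + B2), 0.5 * (C1 + C2)
# ... distorts the residues although both systems are the same:
print(C3 * B3)  # [25. 18. 25.]
```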

Remark 3

The computational cost of the eigendecomposition used in computing the pole-residue realization is low, since we apply it to a ROM, which is typically of order \(\mathcal {O}(10)\) and in most cases less than 100. Therefore, considering the computational cost alone, any eigendecomposition method can be used. For numerical issues, we refer to the “On defective eigenvalues” section, where the general case is discussed without the assumption that all eigenvalues of A are simple.

Remark 4

Although we employ the eigendecomposition to compute the pole-residue realization as in modal analysis, the eigenmodes of the system may not be well approximated. Nevertheless, a ROM will not accurately capture the dynamics of the FOM unless it captures the dominant eigenmodes well enough, so usually the dominant eigenmodes are captured quite well in a good ROM. Since our starting point is the ROM rather than the FOM, our full focus is on the input–output behavior of the system. Therefore, the computation of the modes of the system, which depends on all state variables of the FOM, is beyond our scope. If eigenmodes are to be preserved, we suggest computing the ROMs at \(p_i\) using modal truncation [26] so that, at \(p_i\), the local ROMs contain the exact dominant eigenmodes. With our interpolation approach, the eigenmodes at non-sampled parameters are then approximated by the interpolated poles, so that we expect good approximation up to the unavoidable interpolation error.

On the storage and interpolation of ROMs

Besides the state-space representation, the pole-residue realization can be stored and interpolated in a more efficient scheme.

  • For a real eigenpair \((\lambda _j, v_j)\), we only need to store two real numbers: \(\lambda _j\) and \(C^\text {II}_j=C^\text {I}_j B^\text {I}_j\) because \(B^\text {II}_j\equiv 1\) does not need to be stored.

  • For a complex eigenpair \((a_j \pm \imath b_j, r_j \pm \imath q_j)\), we only need to store four real numbers: \(a_j\), \(b_j\), \(C^\text {II}_{j,1}=C^\text {I}_{j,1}B^\text {I}_{j,1}+ C^\text {I}_{j,2}B^\text {I}_{j,2}\) and \(C^\text {II}_{j,2}=C^\text {I}_{j,2}B^\text {I}_{j,1} - C^\text {I}_{j,1}B^\text {I}_{j,2}\) because \(B^\text {II}_j \equiv [1,\, 0]^\text {T}\) does not need to be stored.

Therefore, for an order-k ROM in the pole-residue realization, the storage is only 2k real numbers: the vector \((\lambda _j, C^\text {II}_j)\) for a real eigenvalue and \((a_j, b_j, C^\text {II}_{j,1},C^\text {II}_{j,2})\) for two conjugate complex eigenvalues.

Assuming that we have \(n_s\) real eigenvalues and \(n_d\) pairs of conjugate complex eigenvalues, we store the ROM with two matrices:

$$\begin{aligned} D \in \mathbb {R}^{n_d \times 4} \quad \text {and} \quad S\in \mathbb {R}^{n_s \times 2}, \end{aligned}$$

where each row of D stores a vector of the form \((a_j, b_j, C^\text {II}_{j,1},C^\text {II}_{j,2})\) and each row of S stores a vector of the form \((\lambda _j, C^\text {II}_j)\).
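As an illustration, the packing of a ROM's eigen-data into the matrices D and S can be sketched as follows (a minimal Python sketch; the function name and input layout are our own, not from the paper):

```python
def pack_pole_residue(real_poles, complex_poles):
    """Pack a real SISO ROM with simple eigenvalues into the compact storage:
    S collects (lambda_j, C^II_j) for real eigenvalues, D collects
    (a_j, b_j, C^II_{j,1}, C^II_{j,2}) for conjugate complex pairs.
    real_poles: list of (lambda_j, CI_j, BI_j);
    complex_poles: list of (a_j, b_j, (CI_j1, CI_j2), (BI_j1, BI_j2))."""
    S = [(lam, ci * bi) for lam, ci, bi in real_poles]
    D = [(a, b,
          c1 * b1 + c2 * b2,     # C^II_{j,1}
          c2 * b1 - c1 * b2)     # C^II_{j,2}
         for a, b, (c1, c2), (b1, b2) in complex_poles]
    return D, S
```

Storing an order-k ROM this way takes \(2n_s + 4n_d = 2k\) real numbers, matching the count above.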

Besides the storage efficiency, this storage scheme also has the following advantages:

  • The order of eigenvalues can be easily rearranged, which provides us great flexibility in the pole-matching process.

  • An unimportant pole can be easily removed, e.g., using the concept of pole dominance [27]. This can be useful when we want to interpolate ROMs of different orders: for example, when a pole in one ROM cannot be matched to any pole of the other ROM and it is of low dominance, it can be removed in the interpolation process.

  • It can be easily written back into the state-space representation.

Under the assumption that the matrix A is nonsingular and all its eigenvalues are simple, the pole-matching process to evaluate the ROM at \(p_* \not \in \{p_1,p_2,\ldots ,p_{n_p}\}\) works as follows:

  1. Given \(n_p\) ROMs built at \(p_1, p_2, \ldots , p_{n_p}\), we first convert all these ROMs into the pole-residue representation \(\{D^{(i)}, S^{(i)}\}\).

  2. To get the ROM for \(p_*\), we first choose an interpolation algorithm and, accordingly, the pre-computed ROMs built at p’s near \(p_*\). For simplicity of presentation, we assume that \(p_1\) and \(p_2\) are chosen for the interpolation.

  3. Match the positions and residues of the poles by matching the rows of \(D^{(1)}\) and \(D^{(2)}\), and the rows of \(S^{(1)}\) and \(S^{(2)}\), respectively. Denote the models after pole matching by \(D^{(1)}_M\), \(D^{(2)}_M\), \(S^{(1)}_M\) and \(S^{(2)}_M\).

  4. Interpolate \(D^{(1)}_M\) at \(p_1\) and \(D^{(2)}_M\) at \(p_2\) to get \(D_*\) at \(p_*\). Similarly, interpolate \(S^{(1)}_M\) and \(S^{(2)}_M\) to get \(S_*\). The interpolated model at \(p_*\) is \(\{D_*, S_*\}\).
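The final interpolation step amounts to row-wise interpolation of the matched storage matrices; a minimal sketch of linear interpolation between two matched ROMs (the helper name is our own):

```python
def interpolate_matched(D1, S1, D2, S2, w):
    """Linearly interpolate two pole-matched ROMs in the compact (D, S)
    storage; w is the weight of the first ROM, 1 - w that of the second.
    Rows with the same index are assumed to correspond to matched poles."""
    lerp = lambda r1, r2: tuple(w * x + (1.0 - w) * y for x, y in zip(r1, r2))
    D_star = [lerp(r1, r2) for r1, r2 in zip(D1, D2)]
    S_star = [lerp(r1, r2) for r1, r2 in zip(S1, S2)]
    return D_star, S_star
```

Any other interpolation operator (e.g., splines over more than two samples) can be applied row-wise in the same way.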

The procedure above is only a general description; in particular, different criteria can be used to match the poles. Here we give some examples.

  1. The simplest method is to sort the rows of D and S according to the real or imaginary parts of their poles.

  2. Another choice is to match the closest poles; this works when all poles move only slightly between the two models.

  3. We can also compute a local “merit function” for each pairing of two individual poles, one of \(\{D^{(1)}, S^{(1)}\}\) and the other of \(\{D^{(2)}, S^{(2)}\}\), e.g., a weighted sum of the distance between the poles and the difference in the residues, and match the poles according to it. More specifically, to find the matched pole in the second ROM for a given pole in the first ROM, for a real pole \((\lambda _j^{(1)}, C^{\text {II},(1)}_j)\), we solve the optimization problem

     $$\begin{aligned} \min _{i \in \{ i | m_i=1\}} \left| \lambda _j^{(1)} - \lambda _i^{(2)}\right| + w \left| C^{\text {II},(1)}_j-C^{\text {II},(2)}_i\right| , \end{aligned}$$

     while for a conjugate complex eigenpair \((a_j^{(1)}, b_j^{(1)}, C^{\text {II},(1)}_{j,1},C^{\text {II},(1)}_{j,2})\), we solve

     $$\begin{aligned} \min _{i \in \{ i | m_i=2\}} \left| |a_j^{(1)}| - |a_i^{(2)}|\right| + \left| |b_j^{(1)}| - |b_i^{(2)}|\right| + w \left\| C^{\text {II},(1)}_{j}-C^{\text {II},(2)}_{i}\right\| , \end{aligned}$$

     where w is a positive real number for weighting.

  4. A global “merit function” can also be used, in which case the sum of the local “merit functions” is minimized. Suppose that the two ROMs have the same numbers of real and complex poles, respectively. We fix the order of the first ROM and represent the order of the poles of the second ROM after pole-matching by the vector \(\nu \), a permutation of \((1,2,\ldots ,n_d+n_s)\). Then we solve the following optimization problem to find \(\nu \):

     $$\begin{aligned} \min _{\nu }&\sum _{ \begin{array}{c} j\in \{ j | m_j=1 \} \\ i \in \{ i | m_{\nu _i}=1 \} \end{array}} \left| \lambda _j^{(1)} - \lambda _{\nu _i}^{(2)}\right| + w \left| C^{\text {II},(1)}_j-C^{\text {II},(2)}_{\nu _i}\right| \nonumber \\&+ \sum _{ \begin{array}{c} j\in \{ j | m_j=2 \} \\ i \in \{ i | m_{\nu _i}=2 \} \end{array}}\left| |a_j^{(1)}| - |a_{\nu _i}^{(2)}|\right| + \left| |b_j^{(1)}| - |b_{\nu _i}^{(2)}|\right| + w \left\| C^{\text {II},(1)}_{j}-C^{\text {II},(2)}_{{\nu _i}}\right\| . \end{aligned}$$
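For the small orders typical of ROMs, the merit-function criteria can be realized directly. The following sketch (restricted to real poles for brevity; the function names are our own) matches greedily by the local merit function and, alternatively, minimizes the global merit function by enumerating all permutations:

```python
from itertools import permutations

def match_local(S1, S2, w):
    """Greedy matching by the local merit function: for each real pole
    (lambda, C^II) of the first ROM, pick the unmatched pole of the second
    ROM minimizing |lam1 - lam2| + w * |C1 - C2|."""
    free = list(range(len(S2)))
    matched = []
    for lam1, c1 in S1:
        best = min(free, key=lambda i: abs(lam1 - S2[i][0]) + w * abs(c1 - S2[i][1]))
        free.remove(best)
        matched.append(S2[best])
    return matched

def match_global(S1, S2, w):
    """Global matching: return the permutation nu of the second ROM's poles
    minimizing the summed merit function (brute force, adequate for the
    typical ROM orders of O(10))."""
    def cost(nu):
        return sum(abs(S1[j][0] - S2[nu[j]][0]) + w * abs(S1[j][1] - S2[nu[j]][1])
                   for j in range(len(S1)))
    return min(permutations(range(len(S2))), key=cost)
```

For larger orders, the global criterion is a linear assignment problem and could be solved by, e.g., the Hungarian algorithm instead of brute force.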

Note that all these methods have limitations. The first three methods may produce conflicts in pole-matching, i.e., a pole of one ROM is matched to multiple poles of the other ROM. Since the poles move when the parameters change, it can happen that \(\lambda _1(p_1) \approx \lambda _2(p_2)\), \(\lambda _1(p_2) \approx \lambda _2(p_1)\), while \(\lambda _1(p_1)\) is far from \(\lambda _1(p_2)\); we call this pole-crossing. If we simply match \(\lambda _1(p_1)\) with \(\lambda _2(p_2)\), and \(\lambda _1(p_2)\) with \(\lambda _2(p_1)\), we lose the true parametric dynamics of the problem. Pole-crossing cannot be captured by the first and second methods, and the third and fourth methods can also fail. However, trial and error in pole-matching is always possible if the engineer can tell whether the interpolated ROM is physically sound. When we are able to build ROMs ourselves, e.g., when we have access to the FOM, we can build ROMs at more samples to explore the parameter space better. An offline–online method could be developed to overcome these difficulties, but that is future work.

However, the current paper focuses on the cases where the ROMs are given and the following requirements are fulfilled:

  1. The given ROMs are accurate enough. Otherwise, we may not be able to match the poles even when the change of the parameters is very small.

  2. Sufficiently many ROMs are provided to represent the parametric dynamics of the system. To compute the ROM at \(p_*\), the poles of the ROMs chosen for interpolation should not differ too much; otherwise, pole matching is difficult due to the lack of data.

Practical considerations

For simplicity of presentation, the discussion above assumed that the matrix A in (11) is real with simple eigenvalues. Now we extend the method to more general cases.

The case of complex A

Assume that A in (12) is a complex matrix with simple eigenvalues. In general, the complex eigenvalues then no longer occur in conjugate pairs. Therefore, we simply conduct a complex eigendecomposition to diagonalize A. The subsequent computational procedure and storage scheme are the same as in the case of a real A with all eigenvalues real. Note that when A is complex, the “residues” stored in \(C^\text {II}_j\) are in general also complex. Therefore, we need to interpolate both the positions and the complex residues of the poles.

On semisimple eigenvalues

Assume that the dynamical system (11) has a semisimple eigenvalue \(\lambda _j\) with multiplicity \(m_j\). Then \(\Lambda _j\) in (13) is an \(m_j \times m_j\) diagonal matrix with all diagonal elements equal to \(\lambda _j\), and the corresponding \(C^\text {I}_j\) and \(B^\text {I}_j\) are a row vector and a column vector of length \(m_j\), respectively. Its corresponding contribution in the sum (20) is

$$\begin{aligned}&\left[ C^\text {I}_{j,1}, C^\text {I}_{j,2}, \ldots , C^\text {I}_{j,m_j} \right] \left( sI - \left[ \begin{array}{cccc} \lambda _j \\ &{} \lambda _j \\ &{}&{} \ddots \\ &{}&{}&{} \lambda _j \end{array} \right] \right) ^{-1} \left[ \begin{array}{c} B^\text {I}_{j,1}\\ B^\text {I}_{j,2}\\ \vdots \\ B^\text {I}_{j,m_j}\\ \end{array} \right] \nonumber \\&\quad =\left[ \sum _{i=1}^{m_j} C^\text {I}_{j,i}B^\text {I}_{j,i} \right] \left( s - \lambda _j \right) ^{-1} \big [1 \big ] \nonumber \\&\quad {\mathop {=}\limits ^{\triangle }} C^\text {II}_j \left( sI - \Lambda ^\text {II}_j \right) ^{-1} B^\text {II}_j. \end{aligned}$$

As this derivation shows, a simple solution is to define \(\displaystyle C^\text {II}_j=\sum \nolimits _{i=1}^{m_j} C^\text {I}_{j,i}B^\text {I}_{j,i}\), \(\Lambda ^\text {II}_j=\big [ \lambda _j \big ]\), \(B^\text {II}_j=1\) and treat it as if it were a simple pole, i.e., we need only store \((\lambda _j, C^\text {II}_j)\) since \(B^\text {II}_j\equiv 1\).

However, this simple strategy does not always work. Consider a parametric system with two poles, one with multiplicity 2 for every parameter value and the other normally simple. If the two parametric poles coincide at the parameter value \(p_*\), the procedure above yields a single pole with multiplicity 3, which makes interpolation with ROMs built at other values of p, which have two poles, difficult. Therefore, we must “separate” the two poles even if they happen to be at the same position. This problem will be discussed in more detail in future work.

On defective eigenvalues

When A has defective eigenvalue(s), the dynamical system (11) cannot be written in the proposed pole-residue realization because the eigendecomposition (16) no longer exists. In this case, A is similar to a Jordan matrix \(J = P^{-1}AP\), whose numerical computation is highly unstable [25]. Although defective eigenvalues rarely occur in practical computations, especially for a ROM, numerical noise can produce a nearly defective eigenvalue, which leads to P having a very large condition number and may cause numerical instability, e.g., in the computation of \(B^\text {I} = P^{-1}B\) in (19). Therefore, in practical computations, we always check the condition number of P, which can be computed by, e.g., the MATLAB function “condeig”. When it is very large, the algorithm stops and reports failure. The solution of this problem is future work.

The pole-matching method for MIMO systems

Now we generalize the pole-matching PMOR method to MIMO systems. When \(r_I>1\) and/or \(r_O>1\) and all eigenvalues of \(A\in \mathbb {R}^{k \times k}\) are simple, our derivation in “The pole-residue realization for SISO systems” section holds until (19) with \(C^\text {I}\in \mathbb {R}^{r_O \times k}\), \(B^\text {I}\in \mathbb {R}^{k \times r_I}\), \(C^\text {I}_j\in \mathbb {R}^{r_O \times m_j}\), and \(B^\text {I}_j\in \mathbb {R}^{m_j \times r_I}\).

Eigenpair \((\lambda _j, v_j)\) with \(m_j=1\)

To study an individual term \(C^\text {I}_j (s I - \Lambda _j )^{-1} B^\text {I}_j\) in (20), let f denote the index of the first non-zero entry of \(B^\text {I}_j\) (which must exist, since otherwise the pole can be removed), and define

$$\begin{aligned}&C^\text {II}_{j,i}=C^\text {I}_{j,i} B^\text {I}_{j,f}, \,&\qquad C^\text {II}_j=[C^\text {II}_{j,1},C^\text {II}_{j,2},\ldots ,C^\text {II}_{j,r_O}]^\text {T}, \nonumber \\&C^\text {II}=[C^\text {II}_{1},C^\text {II}_{2},\ldots ,C^\text {II}_{k}],\,&\qquad B^\text {II}_{j,f}=1, \qquad B^\text {II}_{j,i}=\frac{B^\text {I}_{j,i}}{B^\text {I}_{j,f}}\,\,\,\, (\forall i \ne f), \nonumber \\&B^\text {II}_j=[B^\text {II}_{j,1} ,B^\text {II}_{j,2},\ldots ,B^\text {II}_{j,r_I}],\,&\qquad B^\text {II}=[B^\text {II}_{1} ,B^\text {II}_{2},\ldots ,B^\text {II}_{k}]^\text {T}. \end{aligned}$$

The contribution of \((\lambda _j, v_j)\) in the weighted sum (20) is

$$\begin{aligned} C^\text {I}_j (s I - \Lambda _j )^{-1} B^\text {I}_j&=\frac{C^\text {I}_j B^\text {I}_j}{s-\lambda _j}\nonumber \\&= \frac{[C^\text {I}_{j,1}, C^\text {I}_{j,2}, \ldots , C^\text {I}_{j,r_O}]^\text {T}\cdot [B^\text {I}_{j,1}, B^\text {I}_{j,2}, \ldots , B^\text {I}_{j,r_I}]}{s-\lambda _j}\nonumber \\&=\frac{[C^\text {II}_{j,1}, C^\text {II}_{j,2}, \ldots , C^\text {II}_{j,r_O}]^\text {T}\cdot [B^\text {II}_{j,1}, B^\text {II}_{j,2}, \ldots , B^\text {II}_{j,r_I}]}{s-\lambda _j}\nonumber \\&=C^\text {II}_j (s I - \Lambda _j )^{-1} B^\text {II}_j. \end{aligned}$$

Remark 5

As we have discussed in Remark 2, \(C^\text {I}_j\) and \(B^\text {I}_j\) are not uniquely defined for a given FRF because of the realization freedom. However, the positions and residues of the poles depend only on the FRF. In the case above, the position of the pole is \(\lambda _j\) and the residues of the MIMO system are the entries of the matrix

$$\begin{aligned} \left[ \begin{array}{ccc} C^\text {I}_{j,1} B^\text {I}_{j,1} &{} \ldots &{} C^\text {I}_{j,1} B^\text {I}_{j,r_I} \\ \vdots &{} \ddots &{} \vdots \\ C^\text {I}_{j,r_O} B^\text {I}_{j,1} &{} \ldots &{} C^\text {I}_{j,r_O} B^\text {I}_{j,r_I} \end{array} \right] . \end{aligned}$$

Therefore, all entries of \(C^\text {II}_j\) are determined by the residues of the poles. Actually, all entries of \(B^\text {II}_j\) are also determined by the residues of the poles because

$$\begin{aligned} B^\text {II}_{j,i}=\frac{B^\text {I}_{j,i}}{B^\text {I}_{j,f}}=\frac{C^\text {I}_{j,g}B^\text {I}_{j,i}}{C^\text {I}_{j,g} B^\text {I}_{j,f}}, \end{aligned}$$

where \(C^\text {I}_{j,g}\) is any nonzero entry of \(C^\text {I}_{j}\). However, \(B^\text {II}_{j,i}\) cannot be interpolated directly. For example, if we interpolate a ROM built for \(p_1\) and another ROM built for \(p_2\) with weights \(w(p)\) and \((1-w(p))\), respectively, we should compute \(B^\text {II}_{j,i}\) by

$$\begin{aligned} B^\text {II}_{j,i}(p)=\frac{w(p)C^\text {I}_{j,g}(p_1)B^\text {I}_{j,i}(p_1)+(1-w(p))C^\text {I}_{j,g}(p_2)B^\text {I}_{j,i}(p_2)}{w(p)C^\text {I}_{j,g}(p_1) B^\text {I}_{j,f}(p_1) + (1-w(p)) C^\text {I}_{j,g}(p_2) B^\text {I}_{j,f}(p_2)}, \end{aligned}$$

which first estimates the residues of the poles by interpolation and then computes \(B^\text {II}_{j,i}\) from these residues, rather than by

$$\begin{aligned} B^\text {II}_{j,i}(p)= w(p) \frac{B^\text {I}_{j,i}(p_1)}{B^\text {I}_{j,f}(p_1)}+ (1-w(p)) \frac{B^\text {I}_{j,i}(p_2)}{B^\text {I}_{j,f}(p_2)}, \end{aligned}$$

which loses the connection with the residues of the poles at p. Therefore, it is insufficient to store only \(B^\text {II}_j\) for interpolation purposes. We need to store

$$\begin{aligned} B^\text {II,U}_j=\left[ C^\text {I}_{j,g}B^\text {I}_{j,1},C^\text {I}_{j,g}B^\text {I}_{j,2},\ldots ,C^\text {I}_{j,g}B^\text {I}_{j,r_I}\right] ,\quad b^\text {II,L}_j=C^\text {I}_{j,g} B^\text {I}_{j,f} \end{aligned}$$

and compute \(B^\text {II}_j\) as

$$\begin{aligned} B^\text {II}_j=\frac{1}{b^\text {II,L}_j}B^\text {II,U}_j. \end{aligned}$$
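This residue-consistent interpolation can be sketched as follows (a minimal Python sketch, names our own): interpolate the stored numerator data \(B^\text {II,U}_j\) and denominator \(b^\text {II,L}_j\) first, and divide afterwards.

```python
def interp_BII(BU1, bL1, BU2, bL2, w):
    """Interpolate the stored residue data of one matched pole of two ROMs
    (weights w and 1 - w) and recover B^II(p) = B^{II,U}(p) / b^{II,L}(p).
    Interpolating the ratios B^II directly would lose the connection to the
    residues of the poles."""
    BU = [w * u1 + (1.0 - w) * u2 for u1, u2 in zip(BU1, BU2)]
    bL = w * bL1 + (1.0 - w) * bL2
    return [u / bL for u in BU]
```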

The pole-residue realization for MIMO ROMs

For the complex eigenpairs \((a_j \pm \imath b_j, r_j \pm \imath q_j)\) with \(m_j=2\), it is difficult to derive

$$\begin{aligned} C^\text {I}_j (s I - \Lambda _j )^{-1} B^\text {I}_j=C^\text {II}_j (s I - \Lambda _j )^{-1} B^\text {II}_j, \end{aligned}$$

where \(C^\text {II}_j\in \mathbb {R}^{r_O \times 2}\) and \(B^\text {II}_j\in \mathbb {R}^{2 \times r_I}\) with all their entries either constants or residues of the poles. This requires solving a system of nonlinear equations under constraints, which is difficult; in fact, we do not even know whether a solution exists in general. Research on this topic is future work.

Therefore, we propose two methods to obtain the pole-residue realization for MIMO ROMs.

The pole-residue realization for MIMO ROMs in the complex form When it is not required to preserve a real realization for real systems, a pole-residue realization can easily be computed if all eigenvalues are simple. We replace the similarity transformation (17) by a true eigendecomposition with \(\Lambda \) diagonal rather than block diagonal. Then we can apply the method developed in the “Eigenpair \((\lambda _j, v_j)\) with \(m_j=1\)” section to all eigenvalues to compute the pole-residue realization.

An advantage of this method is that the MIMO structure is strictly preserved: the sizes of \(\Lambda \), \(B^\text {II}\) and \(C^\text {II}\) equal those of A, B and C, respectively. However, for a real system with conjugate complex eigenpairs, complex numbers are introduced in the pole-residue realization.

The pole-residue realization for MIMO ROMs in the real form To derive the pole-residue realization for MIMO ROMs in the real form, we first consider the SIMO (single-input multiple-output) case.

The SIMO case For a SIMO system (12) with \(C\in \mathbb {R}^{r_O \times k}\), \(B\in \mathbb {R}^{k \times 1}\), we denote

$$\begin{aligned} C=[C[1]^\text {T},C[2]^\text {T},\ldots ,C[r_O]^\text {T}]^\text {T}, \end{aligned}$$

where C[i] represents the i-th row of C. Then, for each row C[i] of C, we compute the pole-residue realization of the SISO system \((A, B, C[i])\), which we denote by \((\Lambda [i],B^\text {II}[i],C^\text {II}[i])\).

Note that

$$\begin{aligned}&\Lambda [1]=\Lambda [2]=\ldots =\Lambda [r_O]=\Lambda \nonumber \\ \text {and}\qquad&B^\text {II}[1]=B^\text {II}[2]=\ldots =B^\text {II}[r_O]=B^\text {II} \end{aligned}$$

hold because

  • Despite the different output vectors C[i], the similarity transformation (17) is the same for all these SISO systems because the positions of their poles coincide. Therefore, \(\Lambda [i]=\Lambda \).

  • According to (21) and (24), \(B^\text {II}[i]\) depends completely on the structure of \(\Lambda [i]\). Since \(\Lambda [i]=\Lambda \), \(B^\text {II}[i]=B^\text {II}\) also holds.

Due to the property (39), the pole-residue realization for the SIMO system is \((\Lambda ,B^\text {II},C^\text {II})\) with

$$\begin{aligned} C^\text {II}=\big [C^\text {II,T}[1], C^\text {II,T}[2],\ldots ,C^\text {II,T}[r_O]\big ]^\text {T}. \end{aligned}$$

The MIMO case For a MIMO system (12) with \(C \in \mathbb {R}^{r_O \times k}\), \(B\in \mathbb {R}^{k \times r_I}\), we denote

$$\begin{aligned} B=\big [B[1],B[2],\ldots ,B[r_I]\big ], \end{aligned}$$

where B[i] denotes the i-th column of B.

We first compute the pole-residue realization for each SIMO system (A, B[i], C), which we denote by \((\Lambda ,B^\text {II},C^\text {II}\{i\})\). By an argument similar to that for (39), \(\Lambda \) and \(B^\text {II}\) do not change with i. However, \(C^\text {II}\{i\}\) does change with i because, in general, the residues of each pole differ for different input and output vectors.

Using the method proposed in [28], which reformulates the MIMO system as a parallel connection of split systems, the pole-residue realization of the MIMO system (12) in the real form is

$$\begin{aligned} {\varvec{\Lambda }}&=\text {diag}\big [\Lambda , \Lambda , \ldots , \Lambda \big ], \nonumber \\ \mathbf {B}^\text {II}&=\text {diag}\big [B^\text {II},B^\text {II},\ldots ,B^\text {II}\big ],\nonumber \\ \mathbf {C}^\text {II}&=\big [C^\text {II}\{1\},C^\text {II}\{2\},\ldots ,C^\text {II}\{r_I\}\big ]. \end{aligned}$$

Although this realization preserves the MIMO structure using real arithmetic, the dimension of the ROM is multiplied by the number of inputs. In practical computations of FRFs, however, we do not need to form (42) explicitly. We compute the FRF column-wise, i.e., to compute the i-th column of the FRF, we only need to form \((\Lambda ,B^\text {II},C^\text {II}\{i\})\) for (A, B[i], C).
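The block-diagonal assembly of (42) from the per-column SIMO data can be sketched as follows (pure Python with nested lists; \(\Lambda \) is assumed diagonal and the helper name is our own):

```python
def assemble_mimo(lam, bII, CII):
    """Block-diagonal real pole-residue realization of a MIMO ROM.
    lam: the k poles (the diagonal of Lambda, assumed diagonal here);
    bII: the length-k vector B^II shared by all SIMO subsystems;
    CII: list of r_I matrices, CII[i] being the r_O x k matrix C^II{i}."""
    r_I, k = len(CII), len(lam)
    r_O = len(CII[0])
    lam_big = lam * r_I                                   # diag(Lambda, ..., Lambda)
    B_big = [[bII[r] if col == i else 0.0 for col in range(r_I)]
             for i in range(r_I) for r in range(k)]       # diag(B^II, ..., B^II)
    C_big = [[CII[i][row][r] for i in range(r_I) for r in range(k)]
             for row in range(r_O)]                       # [C^II{1}, ..., C^II{r_I}]
    return lam_big, B_big, C_big
```

As the text notes, the resulting state dimension is \(r_I \cdot k\), which is why forming (42) explicitly is avoided in practice.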

On the storage and interpolation of MIMO ROMs In the MIMO case, the FRF H(s) is a matrix function with \(r_O \times r_I\) entries. We denote the function for the (j, k)-th entry by \(H_{j,k}(s)\). All these functions have the same poles, which need to be stored only once. Normally, they have different residues for the poles, which need to be stored individually. To interpolate ROMs, we conduct pole matching and pole interpolation with methods similar to those discussed for the SISO case. A major difference is that we now have more residue information, which we can use to construct the merit function. With the positions and residues of all poles for all FRF entries, an interpolated ROM can be constructed. The previous two sections actually give two procedures that construct a ROM from the positions and residues of all poles, one for a complex realization and the other for a real realization.

For the complex realization discussed in “The pole-residue realization for MIMO ROMs in the complex form” section, we only need to store the diagonal elements of \(\Lambda \) and the matrices \(C^\text {II}\) in (31), along with \(B^\text {II,U}_j\) and \(b^\text {II,L}_j\) (\(j=1,2,\ldots , k\)) in (35), a total of \(k \times (r_O+r_I+2)\) complex numbers. When we conduct interpolation in this realization after pole-matching, we also need to check that the indices g and f in (35) coincide for each j.

For the real realization form discussed in “The pole-residue realization for MIMO ROMs in the real form” section, we need to store the positions of all poles and \(\mathbf {C}^\text {II}\) defined in (42), a total of \(k \times (r_O\times r_I+1)\) real numbers.

Interpolatory PMOR in the Loewner realization

In this section, we propose a method that interpolates ROMs built by the Loewner framework to approximate the dynamical system described by the FRF H(sp). We will deal with the interpolation of Loewner ROMs in the original form in “Interpolating Loewner ROMs in the original form” section, and the interpolation of Loewner ROMs in the compressed form in “Interpolating Loewner ROMs in the compressed form” section. Both methods rely on Assumption 1.

Assumption 1

We assume that for all sampled values \(p_l\) (\(l=1,2,\ldots ,n_p\)) of the parameter, the same frequency samples and right/left tangential directions are used. Hence, given the frequency shifts \(\mu _i\) and the corresponding left tangential directions \(\ell _i\) (\(i = 1, 2, \ldots , N_\omega \)), a left sample of \(H(s,p_l)\) is defined by

$$\begin{aligned} (\mu _i,\ell _i,v_i(p_l)),\qquad \text {where}\ v_i(p_l)=\ell _i H(\mu _i,p_l). \end{aligned}$$

Similarly, given the frequency shifts \(\lambda _j\) and the corresponding right tangential directions \(r_j\) (\(j = 1,2,\ldots , N_\omega \)), a right sample of \(H(s,p_l)\) is defined by

$$\begin{aligned} (\lambda _j,r_j,w_j(p_l)), \qquad \text {where} \ w_j(p_l)=H(\lambda _j,p_l)r_j. \end{aligned}$$

Under Assumption 1, the Loewner matrix and the shifted Loewner matrix at \(p_l\), which we denote by \(\mathbb {L}(p_l)\) and \(\mathbb {L}_\sigma (p_l)\), respectively, are defined by

$$\begin{aligned} \big [ \mathbb {L}(p_l) \big ]_{i\,j} =\frac{v_i(p_l)r_j - \ell _i w_j(p_l)}{\mu _i-\lambda _j}, \qquad \big [ \mathbb {L}_\sigma (p_l) \big ]_{i\,j} =\frac{\mu _i v_i(p_l)r_j - \ell _i w_j(p_l)\lambda _j}{\mu _i-\lambda _j}. \end{aligned}$$
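For the SISO case (tangential directions equal to 1), these entry-wise definitions translate directly into code (a minimal sketch; the function name is our own):

```python
def loewner_pair(mu, v, lam, w):
    """Build the Loewner matrix L and the shifted Loewner matrix Ls from
    SISO left samples (mu_i, v_i) and right samples (lam_j, w_j); the
    tangential directions are taken as 1. The shift sets {mu_i} and
    {lam_j} must be disjoint."""
    L = [[(v[i] - w[j]) / (mu[i] - lam[j]) for j in range(len(lam))]
         for i in range(len(mu))]
    Ls = [[(mu[i] * v[i] - w[j] * lam[j]) / (mu[i] - lam[j])
           for j in range(len(lam))] for i in range(len(mu))]
    return L, Ls
```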

Interpolating Loewner ROMs in the original form

The following theorem shows the physical meaning of interpolating Loewner ROMs in the original form.

Theorem 2

Assume that \((E_l,A_l,B_l,C_l)\) (\(l=1,2,\ldots ,n_p\)) are ROMs built at \(p_l\) by the Loewner framework in the original form with the left/right triplet samples \((\mu _i,\ell _i,v_i(p_l))\) and \((\lambda _j,r_j,w_j(p_l))\), which satisfy Assumption 1. Then, the following two pROMs built for the parameter value p are equal:

  1. a.

    The pROM \((E_\text {a}(p),A_\text {a}(p),B_\text {a}(p),C_\text {a}(p))\) obtained by applying an arbitrary interpolation operator of the form

    $$\begin{aligned} M(p)=\sum _{l=1}^{n_p} M_l \phi _l (p), \qquad (\phi _i(p_j)=\delta _{ij}) \end{aligned}$$

    to each of E, A, B and C.

  2. b.

    The pROM \((E_\text {b}(p),A_\text {b}(p),B_\text {b}(p),C_\text {b}(p))\) built by the Loewner framework in the original form using the “interpolated left/right data” at p, which is obtained by using the interpolation operator (46) on the left/right samples of the FRF:

    $$\begin{aligned} \left( \mu _i, \ell _i, \sum _{l=1}^{n_p} v_i(p_l)\phi _l(p)\right) \quad \text {and} \quad \left( \lambda _j, r_j, \sum _{l=1}^{n_p}w_j(p_l)\phi _l(p)\right) . \end{aligned}$$

A proof for Theorem 2 is given in Appendix A.

Theorem 2 shows that interpolating Loewner ROMs in the original form results in a pROM that is “optimal” from the perspective of the left/right triplet samples. However, this method is practically unsatisfactory as it needs more memory storage than the original left/right triplet samples. Therefore, our ultimate goal is to interpolate Loewner ROMs in the compressed form.

Interpolating Loewner ROMs in the compressed form

To study the interpolation of Loewner ROMs in the compressed form, we first parameterize Eq. (9) by denoting the (truncated-)SVD at the parameter value \(p_l\) by

$$\begin{aligned} s_l\mathbb {L}_l-\mathbb {L}_{\sigma l}&=Y_l \Sigma _l X_l^* \approx Y_{l,k} \Sigma _{l,k} X_{l,k}^*, \qquad s_l \in \{ \lambda _{l,i} \} \cup \{ \mu _{l,j} \}, \nonumber \\ V_l&=[v_{l,1},v_{l,2},\ldots ,v_{l,n_L}], \qquad W_l=[w_{l,1},w_{l,2},\ldots ,w_{l,n_R}]. \end{aligned}$$

Proposition 1

The matrices \(X_l\) and \(Y_l\) defined in (48) are generalized controllability and observability matrices of the system (4) at \(p_l\), respectively.

For a proof of Proposition 1, we refer to Theorem 5.2 in [15]. Therefore, we can compress the ROM in the original representation by discarding the hardly controllable and hardly observable directions of \(X_l\) and \(Y_l\), i.e., taking the first k dominant columns of \(X_l\) and \(Y_l\), respectively, and then projecting the state vector onto the range of \(X_{l,k}\) and the dual state vector onto the range of \(Y_{l,k}\). This procedure leads to a Loewner ROM in the compressed form as in (10). Now we propose Algorithm 1 for generating Loewner ROMs in the compressed form that can directly be used for interpolation.
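One plausible reading of this construction can be sketched as follows (NumPy; all names are our own assumptions, not the paper's notation): stack the pencils \(s_l\mathbb {L}_l-\mathbb {L}_{\sigma l}\) of all parameter samples, compute global truncated bases by one SVD per stack, and compress every local ROM with the same bases.

```python
import numpy as np

def global_loewner_bases(Ms, K):
    """Global truncated bases. Ms is the list of matrices
    M_l = s_l * L_l - L_sigma_l for l = 1, ..., n_p. One SVD of the
    horizontal stack yields a shared observability basis Y_K, one SVD of
    the vertical stack a shared controllability basis X_K."""
    Y, _, _ = np.linalg.svd(np.hstack(Ms))
    _, _, Xh = np.linalg.svd(np.vstack(Ms))
    return Y[:, :K], Xh[:K, :].conj().T

def compress(L, Lsig, V, W, Y_K, X_K):
    """Compress one local Loewner ROM (E, A, B, C) = (-L, -L_sigma, V, W)
    with the shared bases, as in (10)."""
    return (-Y_K.conj().T @ L @ X_K, -Y_K.conj().T @ Lsig @ X_K,
            Y_K.conj().T @ V, W @ X_K)
```

Because every local ROM is projected with the same \(Y_K\) and \(X_K\), the compressed matrices can then be interpolated entry-wise.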


Theorem 3

For any index l, the controllability matrix satisfies

$$\begin{aligned} \text {rowspan}\left\{ X_l \right\} \subseteq \text {rowspan}\left\{ X \right\} \end{aligned}$$

and the observability matrix satisfies

$$\begin{aligned} \text {colspan}\left\{ Y_l \right\} \subseteq \text {colspan}\left\{ Y \right\} , \end{aligned}$$

where X, Y are defined in (50) and (49), respectively.

A proof for Theorem 3 is given in Appendix A. According to Theorem 3, if we truncate X and Y to eliminate the hardly controllable and hardly observable subspaces, respectively, all ROMs for \(l = 1, 2, \ldots , n_p\) are accurately approximated by (51).

Remark 6

The Loewner pROM in the compressed form (51) is a good approximation of the Loewner pROM in the original form (46), as long as K is large enough so that \(X_K\) and \(Y_K\) capture all dominant components of X and Y. This is because

  • The pROM (51) is actually obtained by applying the projection method with global bases to the pROM (46).

  • At each interpolation point \(p_l\), the controllability and the observability is captured well by (51) according to Theorem 3, i.e., the Loewner pROM (51) in the compressed form interpolates the Loewner ROMs in the original form well.


Numerical results

In this section, we apply the developed methods to three applications: a microthruster model [19], a “FOM” model [21], and a footbridge model [29]. We compare three methods: the pole-matching method, interpolatory PMOR in the Loewner realization, and the interpolation of ROMs on the nonsingular matrix manifold [12].

PMOR on the microthruster model

In this section, we study the performance of the proposed methods on data-driven ROMs in the frequency domain. The FRFs used to compute the data-driven ROMs are generated by the microthruster model [19]. A schematic diagram of the microthruster is shown in Fig. 1. The microthruster model is of the form (4) with order \(n = 4257\) and has a single parameter: the film coefficient.

First, in Fig. 2, we show the convergence of the nonparametric Loewner ROMs in the compressed realization, generated with 100 samples for \(\lambda \) and 100 samples for \(\mu \). As the dimension increases, the FRF of the Loewner ROM approaches that of the FOM. Therefore, the Loewner ROMs are suitable for our study of ROM interpolation.

Fig. 1

A schematic 2D illustration of the microthruster

Fig. 2

The FRFs of Loewner ROMs and the FRF of the FOM at \(p = 268.3\)

Fig. 3

The behavior of ROMs built by interpolation on the nonsingular matrix manifold. a The ROM with \(k=10\) and \(p_{*}=65.51\). b The ROM with \(k=11\) and \(p_{*}=65.51\)

Then, we apply the three PMOR methods to the microthruster model.

  • The manifold method This method interpolates ROMs on the nonsingular matrix manifold [12]. In the numerical tests, we first build ROMs at the parameter samples \(p_1 = 10\), \(p_2 = 268.3\), \(p_3 = 7197\), based on which the manifold method builds the pROM. The FRF of the interpolated pROM at \(p_* = 65.51\) is shown in Fig. 3. The approximation quality improves as the order k of the ROM increases up to \(k = 10\). However, when \(k > 10\), the approximation quality becomes unacceptable. Because of this unsatisfactory performance, we do not test this method further in the next two numerical examples.

  • Interpolation of ROMs in the Loewner realization (Algorithm 1) First, we interpolate the Loewner ROMs in the original form. Figure 4 shows that this method is much more accurate than the manifold method.

    Furthermore, the pROM is much more stable: we never observed divergence as the order of the pROM increases. Then, we interpolate the Loewner ROMs in the compressed form built by Algorithm 1. In Fig. 5, we show the numerical results when different shifts \(s \in \{\lambda _{l,i}\} \cup \{\mu _{l,j}\}\) (defined in (48)) are used. Within each sub-figure, the nonparametric ROMs used for interpolation are built for all \(p_l\)’s using the same shift s specified in the subtitle. No matter which shift we use, the resulting ROM is accurate. As a reference, we show in Fig. 6 the numerical results for interpolating the Loewner ROMs in the compressed form using local bases (10) rather than the global bases of Algorithm 1. These results show that when individual bases rather than global bases are used to compress the ROMs, interpolation cannot produce a pROM with high fidelity. In Fig. 7a, we plot the response surface of the FOM along with the absolute error of the interpolated ROMs generated by Algorithm 1. The figure plots FRFs for 29 samples of the parameter p: \(p_1\), \(p_2\), ..., \(p_{29}\). The samples \(p_1\), \(p_8\), \(p_{15}\), \(p_{22}\) and \(p_{29}\) are used to build the global bases, and the FRFs at all other p’s are obtained by an interpolated ROM generated by Algorithm 1. Figure 7b shows that higher accuracy can be achieved with a more advanced spline interpolation.

  • Interpolation of ROMs in the pole-residue realization To apply the interpolation method based on the pole-residue realization, we first study the pole-matching criterion. We use the criterion that the overall differences in pole positions and pole residues are minimized, which agrees with the intuitive notion of an optimal pole-matching solution. Figure 8 shows the poles of the ROMs for \(p_{22}\) and \(p_{29}\), respectively.

    In this example, the ROMs are of the form (4) with complex E, A, B and C. Since E is nonsingular, we left-multiply the system by \(E^{-1}\), which is of the reduced dimension, to obtain a system of the form (11), for which the pole-residue realization is defined. In this example, the matrix A is complex. To conduct PMOR, we first convert the ROMs (in the form (11)) at \(p_1\), \(p_8\), \(p_{15}\), \(p_{22}\) and \(p_{29}\) to the pole-residue realization. Then, we conduct linear interpolation among the resulting ROMs. The result is shown in Fig. 7c. Its accuracy is slightly better than that of linear interpolation in the Loewner representation.

Fig. 4

Interpolation of Loewner ROMs in the original form. The pROM with order \(k = 21\) is obtained by interpolating nonparametric Loewner ROMs in the original form at \(p_1=10\) and \(p_2=268.3\). The FRFs at \(p_* =65.51\) are shown for comparison

Fig. 5

Interpolation of Loewner ROMs built with global basis using Algorithm  1. The nonparametric Loewner ROMs are built in the compressed form with different frequency shift s. The FRFs at \(p_* =65.51\) are shown in the figure

Fig. 6

Interpolation of Loewner ROMs built with individual bases. The nonparametric Loewner ROMs are built in the compressed form with different frequency shift s and they are interpolated matrix-wise directly using (46). The FRFs at \(p_* =65.51\) are shown in the figure

Fig. 7

Response surface and the absolute error. a Linear interpolation of the Loewner representation. The overall relative error is \(2.4649 \times 10^{-2}\). b Spline interpolation of the Loewner representation. The overall relative error is \(6.0671 \times 10^{-3}\). c Linear interpolation of the pole-residue representation. The overall relative error is \(2.3485 \times 10^{-2}\)

Results on the parametric “FOM” model

Now we apply our method to the parametric “FOM” model presented in [21], which is adapted from the nonparametric “FOM” model in [30]:

$$\begin{aligned} (s \mathcal {I} -\mathcal {A}(p)) X(s,p)&= \mathcal {B} u(s),\\ Y(s,p)&=\mathcal {C}X(s,p), \end{aligned}$$

where \(\mathcal {C}=[10, 10, 10, 10, 10, 10, 1, \ldots ,1]\), \(\mathcal {B}=\mathcal {C}^\text {T}\), and \(\mathcal {A}(p)=\text {diag}(\mathcal {A}_1(p),\mathcal {A}_2,\mathcal {A}_3,\mathcal {A}_4)\) with \(\mathcal {A}_4=-\text {diag}(1,2,\ldots ,1000)\),

$$\begin{aligned} \mathcal {A}_1(p)=\left[ \begin{array}{ll} -1 &{} p \\ -p &{} -1 \end{array} \right] ,\quad \mathcal {A}_2=\left[ \begin{array}{ll} -1 &{} 200\\ -200 &{} -1 \end{array} \right] ,\quad \mathcal {A}_3=\left[ \begin{array}{ll} -1 &{} 400 \\ -400 &{} -1 \end{array} \right] . \end{aligned}$$
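For reference, this benchmark is straightforward to assemble; the following Python sketch is a direct transcription of the matrices defined above.

```python
import numpy as np
from scipy.linalg import block_diag

def fom_matrices(p):
    """Assemble the parametric 'FOM' benchmark (n = 1006) defined above."""
    A1 = np.array([[-1.0, p], [-p, -1.0]])
    A2 = np.array([[-1.0, 200.0], [-200.0, -1.0]])
    A3 = np.array([[-1.0, 400.0], [-400.0, -1.0]])
    A4 = -np.diag(np.arange(1.0, 1001.0))
    A = block_diag(A1, A2, A3, A4)
    C = np.concatenate([10.0 * np.ones(6), np.ones(1000)])[None, :]
    B = C.T
    return A, B, C

A, B, C = fom_matrices(10.0)
assert A.shape == (1006, 1006) and B.shape == (1006, 1) and C.shape == (1, 1006)
```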
Fig. 8

The poles of the ROMs for \(p_{22}\) and \(p_{29}\), respectively

We first interpolate ROMs in the Loewner realization (Algorithm 1) to obtain a pROM. This example, however, exposes a limitation of Algorithm 1 in dealing with peaks that move significantly as the parameter changes. As discussed in [7], when we interpolate FRFs, the positions of the poles do not change. This is because the interpolated FRF

$$\begin{aligned} \sum _{i=1}^k w_i(p) C_i (s E_i - A_i)^{-1} B_i \end{aligned}$$

has a pole at every s at which any of the individual FRFs

$$\begin{aligned} C_i (s E_i - A_i)^{-1} B_i \end{aligned}$$

has a pole. Therefore, the pole set of the interpolated FRF is the union of the pole sets of the interpolating FRFs. This phenomenon is clearly visible in Fig. 9: as the parameter changes, the pROM generated by Algorithm 1 does not capture the “moving peak” that constitutes the true dynamics, but instead evolves through the waxing and waning of two fixed peaks, which also interpolates the two FRFs but seldom occurs in real applications.
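The fixed-peaks phenomenon is easy to reproduce with two toy one-peak FRFs (hypothetical poles with resonances at 5 and 9 rad/s, not the benchmark's values): the 50/50 blend of the FRFs retains both peaks, whereas interpolating the pole position yields the single moving peak.

```python
import numpy as np

# Two toy one-peak FRFs whose peak moves with the parameter: resonances at
# 5 rad/s and 9 rad/s (hypothetical poles, damping 0.1).
w = np.linspace(0.0, 15.0, 3001)
s = 1j * w
H1 = 1.0 / (s - (-0.1 + 5j)) + 1.0 / (s - (-0.1 - 5j))
H2 = 1.0 / (s - (-0.1 + 9j)) + 1.0 / (s - (-0.1 - 9j))

# Interpolating the FRFs at the midpoint keeps BOTH peaks (union of poles)...
H_frf = 0.5 * H1 + 0.5 * H2
# ...whereas interpolating the pole position yields the single moving peak.
H_pole = 1.0 / (s - (-0.1 + 7j)) + 1.0 / (s - (-0.1 - 7j))

m = np.abs(H_frf)
is_peak = (m[1:-1] > m[:-2]) & (m[1:-1] > m[2:])
peaks_frf = w[1:-1][is_peak]

assert np.any(np.abs(peaks_frf - 5.0) < 0.1)  # fixed peak near 5 rad/s
assert np.any(np.abs(peaks_frf - 9.0) < 0.1)  # fixed peak near 9 rad/s
assert abs(w[np.argmax(np.abs(H_pole))] - 7.0) < 0.1  # moving peak near 7
```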

Remark 7

We note the research efforts toward avoiding the problem of “fixed peaks” when interpolating FRFs. For example, a pair of scaling parameters was introduced in [31]. The method works well when all peaks move in the same direction at a similar rate, as demonstrated by the numerical tests therein. In general, however, the method cannot describe the movements of all poles, because it introduces only one additional degree of freedom.

Fig. 9

Interpolating two ROMs built by the Loewner framework. The pROM is built by interpolating the ROMs built at \(p_1\) and \(p_7\)

Fig. 10

The poles of the ROMs in the pole-residue realization generated by different MOR methods: the balanced truncation (BT) method and the data-driven method “ssest” in MATLAB

The ROM interpolation based on the pole-residue realization, by contrast, is capable of capturing the moving peak because it interpolates the positions of the poles. In this example, we use two MOR methods to build the nonparametric ROMs at the parameter samples: a system identification method used as a MOR method (the ssest function in MATLAB) [32], and a balanced truncation (BT) method (the balred function in MATLAB). In both cases, the two ROMs used for interpolation are built at \(p = 10\) and \(p = 32.5\) with dimension \(k = 10\). The positions of the poles of the ROMs at these two parameter values are shown in Fig. 10.

In Fig. 11a, b, we interpolate ROMs built by balred and ssest, respectively. The figures show that, in this case, balanced truncation achieves better overall accuracy than ssest. A more important observation, however, is that the errors at the interpolated parameter values are comparable to the errors at the interpolating parameter values. Therefore, the larger error of the interpolated ssest pROM in the pole-residue realization results from the larger error of the nonparametric ssest ROMs used for interpolation, rather than from the interpolation itself. In both cases, ROM interpolation based on the pole-residue realization gives satisfactory results.
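A minimal sketch of linear interpolation in the pole-residue realization is given below (SISO with scalar residues; the toy pole pair \(-1 \pm ip\) mimics the \(\mathcal {A}_1(p)\) block of the benchmark, and the two ROMs are assumed to be already matched).

```python
import numpy as np

def frf_pole_residue(s, poles, residues):
    """Evaluate H(s) = sum_i residues[i] / (s - poles[i]) (SISO sketch)."""
    return sum(r / (s - lam) for lam, r in zip(poles, residues))

def interp_rom(p, p0, p1, poles0, res0, poles1, res1):
    """Linear interpolation in the pole-residue realization; the two pole
    lists are assumed to be matched pairwise already."""
    t = (p - p0) / (p1 - p0)
    return (1 - t) * poles0 + t * poles1, (1 - t) * res0 + t * res1

# Toy matched ROMs at p0 = 10 and p1 = 32.5 (the sample values used above);
# the pole pair -1 +/- i*p mimics the A_1(p) block of the benchmark.
poles0 = np.array([-1 + 10j, -1 - 10j]); res0 = np.array([0.5 + 0j, 0.5 + 0j])
poles1 = np.array([-1 + 32.5j, -1 - 32.5j]); res1 = res0.copy()

poles, res = interp_rom(21.25, 10.0, 32.5, poles0, res0, poles1, res1)
assert np.allclose(poles, [-1 + 21.25j, -1 - 21.25j])  # the peak moves
H_mid = frf_pole_residue(1j * 21.25, poles, res)
```

Because the pole positions themselves are interpolated, the resonance peak moves with the parameter instead of splitting into fixed peaks.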

Now we interpolate ROMs of different types. In Fig. 12a, we interpolate a BT ROM built at \(p = 10\) and an ssest ROM built at \(p = 32.5\). From Fig. 10, we see that the non-dominant poles of the FOM are represented by significantly different poles of the ROMs produced by the two methods. Nevertheless, their interpolation also gives accurate results, as Fig. 12a shows (see Footnote 1). Note that when we interpolate ROMs of different types, we should be particularly careful about pole matching. If we skip the pole-matching procedure, we obtain the result shown in Fig. 12b, which clearly exhibits the wrong evolution of the peaks due to the interpolation of mismatched poles.
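The pole-matching step emphasized above can be cast as a small assignment problem. The sketch below uses scalar residues and invented values; it minimizes the overall differences in pole positions and residues with SciPy's linear-assignment solver (for MIMO ROMs, the residue matrices would be compared in a matrix norm instead).

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def match_poles(poles_a, res_a, poles_b, res_b, weight=1.0):
    """Reorder (poles_b, res_b) so that the overall differences in pole
    positions and residues w.r.t. (poles_a, res_a) are minimized, solved
    as a linear assignment problem."""
    cost = (np.abs(poles_a[:, None] - poles_b[None, :]) ** 2
            + weight * np.abs(res_a[:, None] - res_b[None, :]) ** 2)
    rows, cols = linear_sum_assignment(cost)
    return poles_b[cols], res_b[cols]

# Toy example: the second ROM stores nearly the same poles in another order.
pa = np.array([-1.0 + 5j, -2.0 + 9j]); ra = np.array([1.0 + 0j, 2.0 + 0j])
pb = np.array([-2.1 + 9.2j, -1.05 + 5.1j]); rb = np.array([2.0 + 0j, 1.0 + 0j])

pb_m, rb_m = match_poles(pa, ra, pb, rb)
assert np.allclose(pb_m, [-1.05 + 5.1j, -2.1 + 9.2j])  # matched pairwise
```

After this reordering, the poles and residues can be interpolated component-wise, which is what makes interpolation between ROMs of different types possible.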

Fig. 11

Interpolation of ROMs of the same type: response surface and the absolute error. a Interpolation of BT ROMs. b Interpolation of ssest ROMs

Fig. 12

Interpolation between ROMs of different types. a Interpolation between a BT ROM and an ssest ROM (with matched poles): response surfaces and the absolute errors. b Interpolation between a BT ROM and an ssest ROM (with mismatched poles): response surface

Fig. 13

The FRFs corresponding to the four interpolating points

Results for the footbridge model

In this section, we consider a large-scale footbridge model. The footbridge crosses the Dijle river in Mechelen (Belgium); it is 31.354 m long and carries a tuned mass damper at its center. The discretized footbridge model is

$$\begin{aligned} \left\{ \begin{array}{l} \Big ( \mathcal {K}_0 + i \omega \mathcal {C}_0 + (k_1 + i \omega c_1) \mathcal {K}_1 -\omega ^2 \mathcal {M}_0 \Big ) X(\omega ,k_1,c_1) = \mathcal {F},\\ Y(\omega ,k_1,c_1)=\mathcal {L} X(\omega ,k_1,c_1), \end{array} \right. \end{aligned}$$

where \(\mathcal {K}_0\) and \(\mathcal {M}_0\) are obtained from a finite element model with 25,962 degrees of freedom, \(\mathcal {C}_0\) represents Rayleigh damping, \(\mathcal {K}_1\) is a matrix with four non-zero entries that represents the interaction between the tuned mass damper and the footbridge, the input vector \(\mathcal {F}\) represents a unit excitation at the center span, and the output vector \(\mathcal {L}\) picks out the vibration at the center span. The model has two parameters, the stiffness of the damper \(k_1\) and the damping coefficient of the damper \(c_1\). To reduce the model, we apply the Krylov subspace method [33, 34] on the first-order equivalent system

$$\begin{aligned} \left\{ \begin{array}{l} \left( \left[ \begin{array}{cc} \mathcal {K}_0 + k_1 \mathcal {K}_1 &{} 0 \\ 0 &{} \mathcal {I} \end{array} \right] + i \omega \left[ \begin{array}{cc} \mathcal {C}_0 + c_1 \mathcal {K}_1 &{} \mathcal {M}_0 \\ -\mathcal {I} &{} 0 \end{array} \right] \right) \left[ \begin{array}{c} X \\ i \omega X \end{array} \right] =\left[ \begin{array}{c} \mathcal {F} \\ 0 \end{array} \right] ,\\ y=[\mathcal {L} , 0] \left[ \begin{array}{c} X \\ i \omega X \end{array} \right] , \end{array} \right. \end{aligned}$$

to obtain ROMs of the form (4); then we left-multiply by \(E^{-1}\) to obtain systems of the form (11). The order of the ROMs is set to 10.
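The first-order assembly can be sketched as follows, using scipy.sparse; the matrices and vectors here are tiny stand-ins, not the 25,962-degree-of-freedom finite element model.

```python
import numpy as np
import scipy.sparse as sp

def first_order_system(K0, C0, M0, K1, F, L, k1, c1):
    """Assemble the first-order equivalent of the footbridge model above:
    (A + i*omega*E) x = b,  y = c @ x.  The input matrices are assumed to
    come from a FE code as scipy.sparse matrices."""
    n = K0.shape[0]
    I = sp.identity(n, format="csc")
    Z = sp.csc_matrix((n, n))
    A = sp.bmat([[K0 + k1 * K1, Z], [Z, I]], format="csc")    # constant part
    E = sp.bmat([[C0 + c1 * K1, M0], [-I, Z]], format="csc")  # i*omega part
    b = np.concatenate([F, np.zeros(n)])
    c = np.concatenate([L, np.zeros(n)])
    return A, E, b, c

# Tiny stand-in matrices (n = 2) just to exercise the assembly.
K0 = sp.csc_matrix(np.diag([4.0, 9.0])); M0 = sp.identity(2, format="csc")
C0 = 0.1 * K0; K1 = sp.csc_matrix(([1.0], ([0], [0])), shape=(2, 2))
F = np.array([1.0, 0.0]); L = np.array([1.0, 0.0])

A, E, b, c = first_order_system(K0, C0, M0, K1, F, L, k1=2.0, c1=0.3)
assert A.shape == (4, 4) and E.shape == (4, 4) and b.shape == (4,)
```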

Fig. 14

The FRFs corresponding to the interpolated point

In this example, we use four points \((k_1,c_1)\) in the parameter space for interpolation: \((10{,}000\, \nicefrac {\mathrm{N}}{\mathrm{m}}, 20\, \nicefrac {\mathrm{Ns}}{\mathrm{m}})\), \((20{,}000\, \nicefrac {\mathrm{N}}{\mathrm{m}}, 20\, \nicefrac {\mathrm{Ns}}{\mathrm{m}})\), \((10{,}000\, \nicefrac {\mathrm{N}}{\mathrm{m}}, 50\, \nicefrac {\mathrm{Ns}}{\mathrm{m}})\) and \((20{,}000\, \nicefrac {\mathrm{N}}{\mathrm{m}}, 50\, \nicefrac {\mathrm{Ns}}{\mathrm{m}})\). The FRFs corresponding to these four points are shown in Fig. 13. Using these four points, we conduct a two-dimensional linear interpolation (the interp2 function in MATLAB) based on the pole-residue representation to obtain the ROM for \((k_1, c_1) = (15{,}000\, \nicefrac {\mathrm{N}}{\mathrm{m}}, 35\, \nicefrac {\mathrm{Ns}}{\mathrm{m}})\). Figure 14 shows that the interpolated ROM is accurate.
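The bilinear interpolation step can be sketched in Python with SciPy's RegularGridInterpolator as an interp2 analogue. The pole values on the grid below are invented for illustration (each real ROM carries \(k = 10\) matched poles and residues), and real and imaginary parts are interpolated separately.

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

# Matched pole data on the 2x2 grid of interpolation points; the k = 10
# poles per ROM are reduced here to a single invented pole for brevity.
k1_grid = np.array([10_000.0, 20_000.0])   # N/m
c1_grid = np.array([20.0, 50.0])           # Ns/m
poles = np.empty((2, 2, 1), dtype=complex)
poles[0, 0, 0] = -0.5 + 40.0j; poles[1, 0, 0] = -0.5 + 44.0j
poles[0, 1, 0] = -0.9 + 40.5j; poles[1, 1, 0] = -0.9 + 44.5j

# Bilinear interpolation (an interp2 analogue); real and imaginary parts
# are interpolated separately.
itp_re = RegularGridInterpolator((k1_grid, c1_grid), poles.real)
itp_im = RegularGridInterpolator((k1_grid, c1_grid), poles.imag)

q = np.array([[15_000.0, 35.0]])           # the interpolated point
pole_star = itp_re(q)[0] + 1j * itp_im(q)[0]
assert np.isclose(pole_star[0], -0.7 + 42.25j)  # average of the 4 corners
```

The residues are interpolated over the same grid in the same way, after which the interpolated pROM is evaluated as a sum of pole-residue terms.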


We conclude the numerical studies with some further discussion:

  • Advantages of the ROM interpolation method based on the pole-residue realization:

    • We do not need to know the explicit parametric expression of the system matrices, because the parameter dependence enters only through the interpolation of the ROMs.

    • It does not assume the existence of a FOM and also works well with ROMs built by data-driven MOR methods.

    • It can even interpolate ROMs built by different MOR methods.

    • Its computational cost is relatively insensitive to the number of parameters.

    • It can deal with complicated parameter dependence, e.g., nonlinear or nonaffine dependence. Since we employ an external interpolation method to handle the parameter dependence, the proposed method is effective as long as the parameter dependence can be captured well locally by the interpolation method.

  • About stability. If linear interpolation is used in the pole-matching PMOR method, stability is preserved, since interpolating poles in the open left half-plane yields poles in the open left half-plane. With other interpolation methods, stability cannot in general be guaranteed. However, the interpolated poles are easy to monitor in the pole-matching method: a straightforward safeguard is to fall back to linear interpolation whenever an interpolated pole lies outside the left half-plane. The Loewner PMOR interpolation method always preserves stability because the FRF at any parameter value is a weighted sum of FRFs of stable systems.
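The stability safeguard described above can be sketched as follows. The pole trajectory is a toy example; whether the spline actually leaves the left half-plane depends on the data, and the linear fallback is invoked only in that case.

```python
import numpy as np
from scipy.interpolate import CubicSpline, interp1d

def interp_pole_stable(p_samples, pole_samples, p):
    """Spline-interpolate one matched pole across the parameter; fall back
    to linear interpolation if the spline value leaves the open left
    half-plane (the safeguard described above)."""
    val = complex(CubicSpline(p_samples, pole_samples)(p))
    if val.real < 0:
        return val
    return complex(interp1d(p_samples, pole_samples)(p))

# Toy pole trajectory whose spline may overshoot toward the imaginary axis.
ps = np.array([0.0, 1.0, 2.0, 3.0])
poles = np.array([-1.0 + 1j, -0.05 + 1j, -0.05 + 1j, -1.0 + 1j])

pole = interp_pole_stable(ps, poles, 1.5)
assert pole.real < 0  # stability is preserved either way
```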


A pole-matching PMOR method that interpolates MIMO linear ROMs in the pole-residue realization was proposed. The method was tested on Loewner-type data-driven ROMs, balanced truncation ROMs, ROMs built by the system identification method ssest, and Krylov-type ROMs. In all these numerical tests, the method gives accurate results. Together with the PMOR method that interpolates MIMO ROMs in the Loewner representation, it demonstrates the central role of the realization in ROM interpolation.


Footnote 1. Numerical experiments show that if we simply remove these “non-dominant poles” of the nonparametric ROMs before interpolation, the accuracy of the pROM becomes much worse.



Abbreviations

BT: balanced truncation

FOM: full-order model

FRF: frequency response function

MIMO: multiple-input multiple-output

MOR: model order reduction

PMOR: parametric model order reduction

pROM: parametric reduced-order model

ROM: reduced-order model

SIMO: single-input multiple-output

SISO: single-input single-output

SVD: singular value decomposition


References

1. Feldmann P, Freund RW. Efficient linear circuit analysis by Padé approximation via the Lanczos process. IEEE Trans Comput Aided Design Integr Circuits Syst. 1995;14:639–49.

2. Odabasioglu A, Celik M, Pileggi LT. PRIMA: passive reduced-order interconnect macromodeling algorithm. In: ICCAD ’97: Proceedings of the 1997 IEEE/ACM international conference on computer-aided design. Washington, DC: IEEE Computer Society; 1997. p. 58–65.

3. Feng L, Yue Y, Banagaaya N, Meuris P, Schoenmaker W, Benner P. Parametric modeling and model order reduction for (electro-)thermal analysis of nanoelectronic structures. J Math Ind. 2016;6(1):1–10.

4. Meerbergen K. Fast frequency response computation for Rayleigh damping. Int J Numer Methods Eng. 2008;73(1):96–106.

5. Han JS, Rudnyi EB, Korvink JG. Efficient optimization of transient dynamic problems in MEMS devices using model order reduction. J Micromech Microeng. 2005;15(4):822–32.

6. Li S, Yue Y, Feng L, Benner P, Seidel-Morgenstern A. Model reduction for linear simulated moving bed chromatography systems using Krylov-subspace methods. AIChE J. 2014;60(11):3773–83.

7. Benner P, Gugercin S, Willcox K. A survey of projection-based model reduction methods for parametric dynamical systems. SIAM Rev. 2015;57(4):483–531.

8. Baur U, Beattie CA, Benner P, Gugercin S. Interpolatory projection methods for parameterized model reduction. SIAM J Sci Comput. 2011;33(5):2489–518.

9. Rozza G, Huynh DBP, Patera AT. Reduced basis approximation and a posteriori error estimation for affinely parametrized elliptic coercive partial differential equations. Arch Comput Methods Eng. 2008;15(3):229–75.

10. Amsallem D, Farhat C. Interpolation method for the adaptation of reduced-order models to parameter changes and its application to aeroelasticity. AIAA J. 2008;46:1803–13.

11. Panzer H, Mohring J, Eid R, Lohmann B. Parametric model order reduction by matrix interpolation. at-Automatisierungstechnik. 2010;58(8):475–84.

12. Amsallem D, Farhat C. An online method for interpolating linear parametric reduced-order models. SIAM J Sci Comput. 2011;33(5):2169–98.

13. Baur U, Benner P. Modellreduktion für parametrisierte Systeme durch balanciertes Abschneiden und Interpolation (Model reduction for parametric systems using balanced truncation and interpolation). at-Automatisierungstechnik. 2009;57(8):411–20.

14. Baur U, Benner P, Greiner A, Korvink JG, Lienemann J, Moosmann C. Parameter preserving model reduction for MEMS applications. Math Comput Model Dyn Syst. 2011;17(4):297–317.

15. Mayo AJ, Antoulas AC. A framework for the solution of the generalized realization problem. Linear Algebra Appl. 2007;425(2–3):634–62.

16. Fu Z-F, He J. Modal analysis. Waltham: Butterworth-Heinemann; 2001.

17. Kutz JN, Brunton SL, Brunton BW. Dynamic mode decomposition: data-driven modeling of complex systems. Philadelphia: Society for Industrial and Applied Mathematics; 2016.

18. Antoulas AC. Approximation of large-scale dynamical systems. Advances in design and control, vol. 6. Philadelphia: SIAM Publications; 2005.

19. Feng L, Rudnyi EB, Korvink JG. Preserving the film coefficient as a parameter in the compact thermal model for fast electro-thermal simulation. IEEE Trans Comput Aided Design Integr Circuits Syst. 2005;24(12):1838–47.

20. Yue Y, Meerbergen K. Using Krylov–Padé model order reduction for accelerating design optimization of structures and vibrations in the frequency domain. Int J Numer Methods Eng. 2012;90(10):1207–32.

21. Ionita AC, Antoulas AC. Data-driven parametrized model reduction in the Loewner framework. SIAM J Sci Comput. 2014;36(3):984–1007.

22. Yue Y, Feng L, Benner P. Interpolation of reduced-order models based on modal analysis. In: 2018 IEEE MTT-S international conference on numerical electromagnetic and multiphysics modeling and optimization (NEMO). 2018.

23. Gugercin S, Antoulas AC, Beattie C. \(\cal{H}_2\) model reduction for large-scale linear dynamical systems. SIAM J Matrix Anal Appl. 2008;30(2):609–38.

24. Benner P, Stykel T. Model order reduction for differential-algebraic equations: a survey. In: Ilchmann A, Reis T, editors. Surveys in differential-algebraic equations IV. Differential-algebraic equations forum. Cham: Springer; 2017. p. 107–60.

25. Golub GH, Van Loan CF. Matrix computations. 3rd ed. London: The Johns Hopkins University Press; 1996.

26. Friswell MI. Candidate reduced order models for structural parameter estimation. J Vib Acoust. 1990;1:93–7.

27. Martins N, Lima LTG, Pinto HJCP. Computing dominant poles of power system transfer functions. IEEE Trans Power Syst. 1996;11(1):162–70.

28. Zhang Z, Hu X, Cheng CK, Wong N. A block-diagonal structured model reduction scheme for power grid networks. In: Design, automation & test in Europe (DATE), Grenoble, France. 2011. p. 1–6.

29. Yue Y, Meerbergen K. Accelerating optimization of parametric linear systems by model order reduction. SIAM J Optim. 2013;23(2):1344–70.

30. Chahlaoui Y, Van Dooren P. A collection of benchmark examples for model reduction of linear time invariant dynamical systems. Technical report 2002-2, SLICOT Working Note. 2002. Accessed 06 Aug 2019.

31. Ferranti F, Knockaert L, Dhaene T. Passivity-preserving parametric macromodeling by means of scaled and shifted state-space systems. IEEE Trans Microw Theory Tech. 2011;59(10):2394–403.

32. Ljung L. System identification: theory for the user. 2nd ed. Englewood Cliffs: Prentice Hall Information and System Sciences Series. Prentice Hall PTR; 1999.

33. Feldmann P, Freund RW. Efficient linear circuit analysis by Padé approximation via the Lanczos process. IEEE Trans Comput Aided Des. 1995;14(5):639–49.

34. Bai Z. Krylov subspace techniques for reduced-order modeling of large-scale dynamical systems. Appl Numer Math. 2002;43(1–2):9–44.



Acknowledgements

The model of the footbridge was developed within the frame of the research project IWT 090137 (“TRICON Prediction and control of human-induced vibrations of civil engineering structures”) and kindly provided as a numerical test case for the present research by Dr. Katrien Van Nimmen. We would also like to thank Professor Athanasios C. Antoulas for generously providing the MATLAB code for the Loewner framework and the “FOM” model.

Authors’ contributions

All authors participated in the development of the methods and techniques. YY conducted the numerical experiments and wrote the draft paper. All authors revised, read, and approved the final manuscript.


Funding

The research was funded by the Max Planck Institute.

Availability of data and materials

The microthruster model and the “FOM” model are available at the MOR Wiki. The footbridge model is not open to the public.

Competing interests

The authors declare that they have no competing interests.

Author information

Correspondence to Yao Yue.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.



Appendix A: A Proof of Theorem 2


$$\begin{aligned} \left[ E_\text {a}(p)\right] _{i,j}&= \sum _{l=1}^{n_p} \left[ E_{l}\right] _{i,j} \phi _l(p) = -\sum _{l=1}^{n_p} \left[ \mathbb {L}_{l}\right] _{i,j} \phi _l(p) =-\sum _{l=1}^{n_p} \frac{v_i(p_l)r_j-\ell _i w_j (p_l)}{\mu _i-\lambda _j} \phi _l(p)\\&=-\frac{\left( \displaystyle \sum \nolimits _{l=1}^{n_p} v_i(p_l)\phi _l(p)\right) r_j-\ell _i \left( \displaystyle \sum \nolimits _{l=1}^{n_p} w_j(p_l)\phi _l(p)\right) }{\mu _i-\lambda _j}\\&= - \left[ \widetilde{\mathbb {L}}(p)\right] _{i,j} = \left[ E_\text {b}(p)\right] _{i,j}, \end{aligned}$$

where \(\widetilde{\mathbb {L}}(p)\) denotes the Loewner matrix computed with the interpolated data at p, namely (47). This proves \(E_\text {a}(p)=E_\text {b}(p)\). Similarly, we prove \(A_\text {a}(p)=A_\text {b}(p)\).

$$\begin{aligned} B_\text {a}(p)=\sum _{l=1}^{n_p}B_l \phi _l(p)= \sum _{l=1}^{n_p}V_l \phi _l(p)= \left[ \sum _{l=1}^{n_p} v_1(p_l)\phi _l(p), \ldots , \sum _{l=1}^{n_p} v_{n_L}(p_l)\phi _l(p) \right] =B_\text {b}(p). \end{aligned}$$

Similarly, we can prove \(C_\text {a}(p)=C_\text {b}(p)\).

Therefore, the pROM

$$\begin{aligned} (E_\text {a}(p),A_\text {a}(p),B_\text {a}(p),C_\text {a}(p)) \end{aligned}$$

and the pROM

$$\begin{aligned} (E_\text {b}(p),A_\text {b}(p),B_\text {b}(p),C_\text {b}(p)) \end{aligned}$$

are equal. \(\square \)

Appendix B: A Proof of Theorem 3


Define \(I_l=[0_{n\times n(l-1)}, I_{n\times n}, 0_{n\times n(n_p-l)}]^T\). Then,

$$\begin{aligned} s_l \mathbb {L}-\mathbb {L}_{\sigma l} = Y \Sigma _H X_H I_l = I_l^T Y_V \Sigma _V X. \end{aligned}$$

Since Y is orthonormal,

$$\begin{aligned} \Sigma _H X_H I_l=Y^TI_l^T Y_V \Sigma _V X. \end{aligned}$$


Hence,

$$\begin{aligned} s_l \mathbb {L}-\mathbb {L}_{\sigma l} = Y Y^TI_l^T Y_V \Sigma _V X. \end{aligned}$$

Compute the SVD of \(Y^TI_l^T Y_V \Sigma _V\) as

$$\begin{aligned} Y^TI_l^T Y_V \Sigma _V=Y_l'\Sigma _l' X_l', \end{aligned}$$

and the SVD of \(s_l \mathbb {L}-\mathbb {L}_{\sigma l}\) is

$$\begin{aligned} \text {svd}(s_l \mathbb {L} - \mathbb {L}_{\sigma l})=(Y Y_l') \Sigma _l' (X_l'X) \end{aligned}$$

because both \(Y Y_l'\) and \(X_l'X\) are orthonormal and \(\Sigma _l'\) is diagonal with non-negative diagonal entries. Therefore,

$$\begin{aligned} \text {rowspan}\{X_l\}= & {} \text {rowspan}\{ X_l' X \}\subseteq \text {rowspan}\{X\}, \\ \text {colspan}\{Y_l\}= & {} \text {colspan}\{Y Y_l'\} \subseteq \text {colspan}\{Y\}. \end{aligned}$$

\(\square \)

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

Keywords


  • Parametric model order reduction
  • Interpolation methods
  • Data-driven methods
  • Pole analysis