Formulation
Problem statement
We suppose that we are given an open domain \(\Omega \subset \mathbb {R}^{d}\), d=1, 2 or 3, with boundary ∂Ω. We then let X denote the Hilbert space
$$X \equiv \left\{ v \in H^{1}(\Omega) \colon v|_{\partial\Omega_{D}} = 0 \right\}, $$
where ∂Ω_{D}⊂∂Ω is the portion of the boundary on which we enforce homogeneous Dirichlet boundary conditions. We suppose that X is endowed with an inner product (·,·)_{X} and induced norm ∥·∥_{X}. Recall that for any domain \(\mathcal {O}\) in \(\mathbb {R}^{d}\),
$$\begin{array}{*{20}l} H^{1} (\mathcal{O})& \equiv \left\{ v \in L^{2} (\mathcal{O}) \colon \nabla v \in (L^{2} (\mathcal{O}))^{d} \right\},\\ \text{where}\ L^{2} (\mathcal{O}) &\equiv \left\{ v\ \text{measurable over}\ \mathcal{O} \colon \int_{\mathcal{O}} v^{2}\ \text{finite} \right\}. \end{array} $$
Furthermore, let Y≡L^{2}(Ω).
We now introduce an abstract formulation for our eigenvalue problem. For any \(\mu \in \mathcal {D}\), let \(a(\cdot,\cdot ;\mu):X\times X\rightarrow \mathbb {R}\) and \(m(\cdot,\cdot ;\mu):X\times X\rightarrow \mathbb {R}\) denote continuous, coercive, symmetric bilinear forms with respect to X and Y, respectively. We suppose that \(X^{\mathcal {N}} \subset X\) is a finite element space of dimension \(\mathcal {N}\). Given a parameter \(\mu \in \mathcal {D} \subset \mathbb {R}^{P}\), where \(\mathcal {D}\) is our parameter domain of dimension P, we find the set of eigenvalues and eigenvectors (λ(μ),u(μ)), where \(\lambda (\mu) \in \mathbb {R}_{> 0}\) and \(u(\mu) \in X^{\mathcal {N}}\) satisfy
$$\begin{array}{@{}rcl@{}} a(u(\mu),v;\mu) &=& \lambda(\mu)m(u(\mu),v;\mu), \quad \forall v \in X^{\mathcal{N}}, \end{array} $$
((1))
$$\begin{array}{@{}rcl@{}} m(u(\mu),u(\mu);\mu) &=& 1. \end{array} $$
((2))
We assume that the eigenvalues λ_{n}(μ) are sorted such that \(0 < \lambda _{1}(\mu) \leq \lambda _{2}(\mu) \leq \ldots \leq \lambda _{\mathcal {N}}(\mu)\), and to each eigenvalue λ_{n}(μ) we associate a corresponding eigenvector u_{n}(μ). Eigenvalues can have multiplicity greater than one, and hence successive eigenvalues can be equal, λ_{n}(μ)=⋯=λ_{n+k}(μ), each associated with linearly independent eigenvectors.
The parametric dependence of the problem usually takes the form of variable PDE coefficients or variable geometry. For instance, in linear elasticity, the vector μ can contain the different Young’s modulus values of different subdomains, as well as the parameters of some mapping function describing the geometrical variability.
We now define a surrogate eigenvalue problem that will be convenient for subsequent developments. For a given “shift factor” \(\sigma \in \mathbb {R}_{\geq 0}\), we modify (1), (2) such that for any \(\mu \in \mathcal {D}\), we find \(\tau (\mu,\sigma) \in \mathbb {R}\) and \(\chi (\mu,\sigma) \in X^{\mathcal {N}}\) that satisfy
$$\begin{array}{@{}rcl@{}} \mathcal{B}(\chi(\mu,\sigma),v;\mu;\sigma) &=& \tau(\mu,\sigma)a(\chi(\mu,\sigma),v;\mu), \quad \forall v \in X^{\mathcal{N}}, \quad \end{array} $$
((3))
$$\begin{array}{@{}rcl@{}} a(\chi(\mu,\sigma),\chi(\mu,\sigma);\mu) &=& 1. \end{array} $$
((4))
Here
$$ \mathcal{B}(w,v;\mu;\sigma) \equiv a(w,v;\mu) - \sigma m(w,v;\mu) $$
((5))
is our “shifted” bilinear form. Note that we change the bilinear form on the right-hand side from m(·,·) to a(·,·), which corresponds to a different norm. This choice is motivated by error estimation, presented later in the paper, as it permits us to derive relative error estimates for the eigenvalue λ_{n}(μ).
We also sort the set of eigenvalues such that \(\tau _{1}(\mu,\sigma) \leq \tau _{2}(\mu,\sigma) \leq \ldots \leq \tau _{\mathcal {N}}(\mu,\sigma)\) – note that due to the shift the first eigenvalues can now be negative. It is clear that \(\chi _{n}(\mu,\sigma) = \frac {1}{\sqrt {\lambda _{n}(\mu)}}u_{n}(\mu)\) for any \(\sigma \in \mathbb {R}\), so we shall henceforth write χ_{n}(μ). Also
$$ \tau_{n}(\mu,\sigma) = \frac{\lambda_{n}(\mu) - \sigma}{\lambda_{n}(\mu)}, $$
((6))
so that
$$\begin{array}{@{}rcl@{}} \tau_{n}(\mu,\sigma) >0, && \text{if}\enspace 0 \leq \sigma < \lambda_{n}(\mu), \end{array} $$
((7))
$$\begin{array}{@{}rcl@{}} \tau_{n}(\mu,\sigma) =0, && \text{if}\enspace \sigma = \lambda_{n}(\mu), \end{array} $$
((8))
$$\begin{array}{@{}rcl@{}} \tau_{n}(\mu,\sigma) <0, && \text{if}\enspace \sigma > \lambda_{n}(\mu), \end{array} $$
((9))
for \(n = 1,\ldots,\mathcal {N}\).
Remark 2.1.
The reason for introducing the surrogate eigenvalue problem (3) is that when condition (8) is achieved, the right-hand side of (3) vanishes and we can consider the left-hand side in isolation as a linear problem to which we apply the SCRBE method, as described in the following sections. Two points have to be made clear about the parameter σ:

σ is meant to approximate a given eigenvalue λ_{n}(μ) of the original eigenproblem (1) by virtue of property (8);

the value for which σ=λ_{n}(μ) will be automatically determined by a direct search algorithm as presented in Section ‘Eigenvalue computation’.
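The effect of the shift can be checked numerically on a small generalized eigenproblem. The sketch below uses random symmetric positive definite matrices as stand-ins for the discretized forms a and m (assumed data, not the paper's implementation) and verifies relation (6) between the shifted and original eigenvalues:

```python
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(0)
n = 6

# Random SPD stand-ins for the discretized bilinear forms a and m.
Q = rng.standard_normal((n, n))
A = Q @ Q.T + n * np.eye(n)            # "stiffness" matrix for a(.,.)
R = rng.standard_normal((n, n))
M = R @ R.T + n * np.eye(n)            # "mass" matrix for m(.,.)

# Original eigenproblem (1): A u = lambda M u, eigenvalues ascending.
lam = eigh(A, M, eigvals_only=True)

# Shifted problem (3): (A - sigma M) chi = tau A chi.
sigma = 0.5 * lam[0]                    # any shift with 0 <= sigma < lambda_1
tau = eigh(A - sigma * M, A, eigvals_only=True)

# Property (6): tau_n = (lambda_n - sigma) / lambda_n, with the ordering
# preserved since lambda -> (lambda - sigma)/lambda is increasing for sigma > 0.
assert np.allclose(tau, (lam - sigma) / lam)
assert tau[0] > 0                       # cf. (7): sigma < lambda_1
```

Setting σ equal to some `lam[n]` instead makes the corresponding shifted eigenvalue vanish, which is exactly the zero crossing exploited by the direct search.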
Static condensation
We now move to the component level. We suppose that the system domain is naturally decomposable into I interconnected parametrized components. Each component i is associated with a subdomain Ω_{i}, where
$$\overline{\Omega} = \bigcup\limits_{i=1}^{I} \overline{\Omega}_{i}, \qquad \Omega_{i} \cap \Omega_{i'} = \emptyset \quad \text{for}\ i \neq i'. $$
We now introduce the notion of “port” that is commonly used in the literature related to CMS methods. A port corresponds to the interface shared by two components that are connected together. When looking at the global system, we describe the ports as global, whereas when considering a single component, we describe the ports as local. We say that components i and i′ are connected at global port p if \(\overline \Omega _{i} \cap \overline \Omega _{i'}=\Gamma _{p}\neq \emptyset \), where 1≤p≤n^{Γ} and n^{Γ} is the total number of global ports in the system. We also say that \({\gamma _{i}^{j}}=\Gamma _{p}\) and \(\gamma _{i'}^{j'}=\Gamma _{p}\) are local ports of components i and i′ respectively, where \(1\leq j \leq n_{i}^{\gamma }\) and \(n_{i}^{\gamma }\) is the total number of local ports in component i. Figure 1 shows an example of a three-component system, with the corresponding global and local port definitions.
We assume that the FE space \(X^{\mathcal {N}}\) conforms to our components and ports, hence we can define the discrete spaces \(X_{i}^{\mathcal {N}}\) and \(Z_{p}^{\mathcal {N}}\) that are simply the restrictions of \(X^{\mathcal {N}}\) to component i and global port p, respectively. For given i, let \(X^{\mathcal {N}}_{i;0}\) denote the “component bubble space” — the restriction of \(X^{\mathcal {N}}\) to Ω_{i} with homogeneous Dirichlet boundary conditions on each \({\gamma _{i}^{j}}, 1\leq j \leq n_{i}^{\gamma }\),
$$X^{\mathcal{N}}_{i ; 0} \equiv \left\{ v|_{\Omega_{i}} \colon v \in X^{\mathcal{N}}; \: v|_{{\gamma_{i}^{j}}} = 0, \, 1\leq j\leq n_{i}^{\gamma}\right\}. $$
We denote by \(\mathcal {N}_{p}^{\Gamma }\) the dimension of the port space \(Z_{p}^{\mathcal {N}}\) associated with global port p, and we say that the global port p has \(\mathcal {N}_{p}^{\Gamma }\) degrees of freedom (dof). For each component i, we denote by k′ a local port dof number, and by K_{i} the total number of dof on its local ports, such that 1≤k′≤K_{i}. We then introduce the map \(\mathcal {P}_{i}(k')=(p,k)\), which associates a local port dof k′ in component i with its global port representation: global port p and dof k, \(1\leq k \leq \mathcal {N}_{p}^{\Gamma }\).
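For concreteness, the map \(\mathcal {P}_{i}\) can be represented as a simple lookup table. The sketch below uses a made-up two-component, two-port numbering (0-based, unlike the paper's 1-based indices); all names and sizes are hypothetical:

```python
# Hypothetical layout: component 1 has one local port (3 dof) mapped to
# global port 0; component 2 has two local ports mapped to global ports
# 0 and 1. Local dofs k' are numbered consecutively within a component.
port_dof_map = {
    1: [(0, 0), (0, 1), (0, 2)],                 # gamma_1^1 = Gamma_0
    2: [(0, 0), (0, 1), (0, 2),                  # gamma_2^1 = Gamma_0
        (1, 0), (1, 1), (1, 2)],                 # gamma_2^2 = Gamma_1
}

def P(i, k_local):
    """The map P_i(k') = (p, k): global port p and global port dof k."""
    return port_dof_map[i][k_local]

# Components 1 and 2 are connected at Gamma_0: their local dofs on that
# shared port must map to the same global (p, k) pairs.
assert all(P(1, k) == P(2, k) for k in range(3))
```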
To formulate our static condensation procedure we must first introduce the basis functions for the port space \(Z_{p}^{\mathcal {N}}\) as \(\{\zeta _{p,1},\cdots,\zeta _{p,\mathcal {N}_{p}^{\Gamma }}\}\). The particular choice for these functions is not important for now, but it becomes critical when dealing with port reduction – we refer to Section ‘Port reduction’. For a local port dof number k′ such that \(\mathcal {P}_{i}(k')=(p,k)\), we then introduce the interface function \(\psi ^{i}_{k'} \in X^{\mathcal {N}}_{i}\), which is the harmonic extension of the associated port space basis function into the interior of the component domain Ω_{i}, and satisfies
$$ \int_{\Omega_{i}} \nabla \psi^{i}_{k'} \cdot \nabla v = 0,\enspace \forall v \in X^{\mathcal{N}}_{i ; 0}, $$
((10))
$$ \psi^{i}_{k'} = \left\{ \begin{array}{ll} \zeta_{p,k} & \text{on}\,\, \Gamma_{p} \\ 0 & \text{on}\,\, {\gamma_{i}^{j}} \neq \Gamma_{p}, \ 1\leq j \leq n_{i}^{\gamma}. \end{array} \right. $$
((11))
We show in Figure 2 an example of port basis functions and interface functions.
If components i and j are connected, then for each pair of matching local port dofs k_{i} and k_{j} such that \(\mathcal {P}_{i}(k_{i})=\mathcal {P}_{j}(k_{j})=(p,k)\), we define the global interface function \(\Psi _{p,k}\in X^{\mathcal {N}}\) as
$$ \Psi_{p,k}= \left\{ \begin{array}{ll} \psi^{i}_{k_{i}} & \text{on}\ \Omega_{i} \\ \psi^{j}_{k_{j}} & \text{on}\ \Omega_{j} \\ 0 & \text{elsewhere}. \end{array} \right. $$
((12))
We will now develop an expression for χ_{n}(μ) which involves only dof on the ports, by virtue of elimination of the interior dof given that σ=λ_{n}(μ) – starting from (13) to finally arrive at (18). Let us suppose that we set σ=λ_{n}(μ) (for some n) so that the right-hand side of (3) vanishes. Then, for \(\chi _{n}(\mu) \in X^{\mathcal {N}}\) we have
$$\mathcal{B}(\chi_{n}(\mu),v;\mu;\sigma) = 0, \quad \text{for all}\ v \in X^{\mathcal{N}}. $$
We then express \(\chi _{n}(\mu) \in X^{\mathcal {N}}\) in terms of “interface” and “bubble” contributions,
$$ \chi_{n}(\mu) = \sum_{i=1}^{I} b_{i}(\mu,\sigma) + \sum_{p=1}^{n^{\Gamma}}\sum_{k=1}^{\mathcal{N}_{p}^{\Gamma}}U_{p,k}(\mu,\sigma)\Psi_{p,k}, $$
((13))
where the U_{p,k}(μ,σ) are interface function coefficients corresponding to port p, and \(b_{i}(\mu,\sigma) \in X_{i;0}^{\mathcal {N}}\). Here χ_{n} is independent of σ, but we shall see shortly that we will need b_{i} and U_{p,k} to be σ-dependent in general.
We then restrict to a single component i to obtain
$$ \mathcal{B}_{i}(\chi_{n}(\mu),v;\mu;\sigma) = 0, \quad \text{for all}\ v \in X^{\mathcal{N}}_{i;0}, $$
((14))
where \(\mathcal {B}_{i}(w,v;\mu ;\sigma) \equiv a_{i}(w,v;\mu) - \sigma m_{i}(w,v;\mu)\), and where a_{i} and m_{i} indicate the restrictions of a and m to Ω_{i}, respectively. Substitution of (13) into (14) leads to
$$ \mathcal{B}_{i}(b_{i}(\mu,\sigma),v;\mu;\sigma) + \sum_{k=1}^{K_{i}}U_{\mathcal{P}_{i}(k)}(\mu,\sigma)\mathcal{B}_{i}(\psi_{i,k},v;\mu;\sigma) = 0, $$
((15))
for all \(v \in X^{\mathcal {N}}_{i;0}\).
It can be shown from linearity of the above equation that we can reconstruct b_{i}(μ,σ) as
$$b_{i}(\mu,\sigma) = \sum_{k=1}^{K_{i}}U_{\mathcal{P}_{i}(k)}(\mu,\sigma) b_{i,k}(\mu,\sigma), $$
where \(b_{i,k}(\mu,\sigma) \in X_{i;0}^{\mathcal {N}}\) satisfies
$$ \mathcal{B}_{i}(b_{i,k}(\mu,\sigma),v;\mu;\sigma) =  \mathcal{B}_{i}(\psi_{i,k},v;\mu;\sigma), \quad \forall v \in X_{i;0}^{\mathcal{N}}. $$
((16))
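Algebraically, (16) is one linear solve per interface function: ordering the component dofs as [interior, port], the interior block of the shifted operator is solved against the coupling to the lifted port basis vector. A numpy sketch with random SPD stand-ins for the component matrices (assumed data; note that only the sum ψ + b is fixed, so the sketch takes the discrete lifting of ψ to be zero in the interior):

```python
import numpy as np

rng = np.random.default_rng(1)
n0, ng = 8, 3                            # interior (bubble) dofs / port dofs

def random_spd(n):
    Q = rng.standard_normal((n, n))
    return Q @ Q.T + n * np.eye(n)

# Stand-ins for the component matrices of a_i and m_i, dofs ordered
# [interior, port].
A_i = random_spd(n0 + ng)
M_i = random_spd(n0 + ng)
sigma = 0.1                               # assumed to satisfy sigma < lambda_{i,1}
B_i = A_i - sigma * M_i                   # shifted component operator, cf. (5)

B00 = B_i[:n0, :n0]                       # interior-interior block
B0g = B_i[:n0, n0:]                       # interior-port coupling block

# Discrete interface function for port dof k: unit value on that port dof,
# zero elsewhere (interior values of psi are immaterial for psi + b).
k = 0
psi = np.concatenate([np.zeros(n0), np.eye(ng)[k]])

# (16): B_i(b_{i,k}, v) = -B_i(psi_{i,k}, v) for all interior test vectors v.
b = np.concatenate([np.linalg.solve(B00, -B0g[:, k]), np.zeros(ng)])

# The combined function psi + b is "B_i-harmonic": its residual vanishes
# on every interior dof.
assert np.allclose((B_i @ (psi + b))[:n0], 0.0)
```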
Let \((\lambda _{i,n}(\mu),\chi _{i,n}(\mu)) \in \mathbb {R} \times X_{i;0}^{\mathcal {N}}\) denote the n^{th} eigenpair of the local eigenproblem
$$ a_{i}(\chi_{i,n}(\mu),v;\mu) = \lambda_{i,n}(\mu) m_{i}(\chi_{i,n}(\mu),v;\mu), \quad \forall v \in X_{i;0}^{\mathcal{N}}, $$
((17))
then, since
$$\begin{array}{@{}rcl@{}} \inf_{v \in X_{i;0}^{\mathcal{N}}} \frac{\mathcal{B}_{i}(v,v;\mu;\sigma)}{\|v\|_{X,i}^{2}} &=& \inf_{v \in X_{i;0}^{\mathcal{N}}} \frac{a_{i}(v,v;\mu) - \sigma m_{i}(v,v;\mu)}{\|v\|_{X,i}^{2}}\\ &\geq& \inf_{v \in X_{i;0}^{\mathcal{N}}} \frac{a_{i}(v,v;\mu) - \sigma m_{i}(v,v;\mu)}{m_{i}(v,v;\mu)} \inf_{v \in X_{i;0}^{\mathcal{N}}} \frac{m_{i}(v,v;\mu)}{\|v\|_{X,i}^{2}}\\ &=& (\lambda_{i,1}(\mu) - \sigma) \inf_{v \in X_{i;0}^{\mathcal{N}}} \frac{m_{i}(v,v;\mu)}{\|v\|_{X,i}^{2}}, \end{array} $$
the bilinear form \(\mathcal {B}_{i}(\cdot,\cdot ;\mu ;\sigma)\) is coercive on \(X_{i;0}^{\mathcal {N}}\) if σ<λ_{i,1}(μ), where λ_{i,1}(μ) is the smallest eigenvalue of (17). Hence (16) has a unique solution under this condition. Note that we expect that λ_{i,1}(μ)>λ_{1}(μ), and even λ_{i,1}(μ)>λ_{n}(μ) for n=2, 3 or 4 — of course in practice the balance between λ_{n} and \(\lambda _{i,n^{\prime }}\) will depend on the details of a particular problem.
Now for \(1\leq k \leq \mathcal {N}_{p}^{\Gamma }\) and each p, let
$$\Phi_{p,k}(\mu,\sigma) = \Psi_{p,k} + \sum_{i,k'\ \text{s.t.}\ \mathcal{P}_{i}(k')=(p,k)} b_{i,k'}(\mu,\sigma), $$
and let us define the “skeleton” space \(X_{\mathcal {S}}(\mu,\sigma)\) as
$$X_{\mathcal{S}}(\mu,\sigma) \equiv \text{span}\{\Phi_{p,k}(\mu,\sigma):1\leq p \leq n^{\Gamma}, 1 \leq k \leq \mathcal{N}_{p}^{\Gamma}\}. $$
This space is of dimension \(n_{\text {sc}}=\sum _{p=1}^{n^{\Gamma }} \mathcal {N}_{p}^{\Gamma }\).
Remark 2.2.
Note that the interface functions are intermediate quantities that are completed with bubble functions. Although the interface functions are the result of a simple harmonic lifting with the homogeneous Laplace operator, the subsequent bubble functions are computed based on the problem-dependent a and m bilinear forms; hence they capture the possible heterogeneities intrinsic to the problem. The skeleton space \(X_{\mathcal {S}}(\mu,\sigma)\) is therefore suitable for approximation.
We restrict (13) to a single component i to see that for σ=λ_{n}(μ) we obtain
$$\chi_{n}(\mu)|_{\Omega_{i}} = \sum_{k=1}^{K_{i}} U_{\mathcal{P}_{i}(k)}(\mu,\sigma) \left(b_{i,k}(\mu,\sigma) + \psi_{i,k}\right). $$
This then implies
$$ \chi_{n}(\mu) = \sum_{p=1}^{n^{\Gamma}} \sum_{k=1}^{\mathcal{N}_{p}^{\Gamma}} \ U_{p,k}(\mu,\sigma) \: \Phi_{p,k}(\mu,\sigma) \in X_{\mathcal{S}}(\mu,\sigma). $$
((18))
Then, for σ=λ_{n}(μ) and \(\mu \in \mathcal {D}\), we are able to solve for the coefficients U_{p,k}(μ,σ) from the static condensation eigenvalue problem on \(X_{\mathcal {S}}(\mu,\sigma)\): find \(\chi _{n}(\mu) \in X_{\mathcal {S}}(\mu,\sigma)\) such that
$$\begin{array}{@{}rcl@{}} \mathcal{B}(\chi_{n}(\mu),v;\mu;\sigma) &=& 0, \quad \forall v \in X_{\mathcal{S}}(\mu,\sigma),\quad \end{array} $$
((19))
$$\begin{array}{@{}rcl@{}} a(\chi_{n}(\mu),\chi_{n}(\mu);\mu) &=& 1. \end{array} $$
((20))
We now relax the condition σ=λ_{n}(μ) to obtain the following problem: for σ∈[0,σ_{max}] and \(\mu \in \mathcal {D}\), find \((\overline {\tau }_{n}(\mu,\sigma),\overline {\chi }_{n}(\mu,\sigma)) \in \mathbb {R} \times X_{\mathcal {S}}(\mu,\sigma)\) such that
$$\begin{array}{@{}rcl@{}} \mathcal{B}(\overline{\chi}_{n}(\mu,\sigma),v;\mu;\sigma) &=& \overline{\tau}_{n}(\mu,\sigma)a(\overline{\chi}_{n}(\mu,\sigma),v;\mu), \quad \forall v \in X_{\mathcal{S}}(\mu,\sigma), \qquad \end{array} $$
((21))
$$\begin{array}{@{}rcl@{}} a(\overline{\chi}_{n}(\mu,\sigma),\overline{\chi}_{n}(\mu,\sigma);\mu) &=& 1. \end{array} $$
((22))
It is important to note that this new eigenproblem (21), (22) differs from (3), (4) in two ways: first, we consider a subspace \(X_{\mathcal {S}}(\mu,\sigma)\) of \(X^{\mathcal {N}}\), and as a consequence \(\overline {\tau }_{n}(\mu,\sigma) \geq \tau _{n}(\mu,\sigma)\); second, the subspace \(X_{\mathcal {S}}(\mu,\sigma)\), unlike \(X^{\mathcal {N}}\), depends on σ, and furthermore only for σ=λ_{n}(μ) does the subspace \(X_{\mathcal {S}}(\mu,\sigma)\) reproduce the eigenfunction χ_{n}(μ). We now show
Proposition 2.1.
Suppose σ<λ_{i,1}(μ) for each 1≤i≤I to ensure that the static condensation is well-posed.

(i)
\(\overline {\tau }_{n}(\mu,\sigma) \geq \tau _{n}(\mu,\sigma)\), \(n=1,\ldots,\text {dim}(X_{\mathcal {S}}(\mu,\sigma))\),

(ii)
τ_{n}(μ,σ)=0 if and only if σ=λ_{n}(μ),

(iii)
σ=λ_{n}(μ) if and only if there exists some n′ such that \(\overline {\tau }_{n'}(\mu,\sigma) = 0\).
Proof.

(i)
The case n=1 follows from the Rayleigh quotients
$$ \tau_{1}(\mu,\sigma) = \inf_{w \in X^{\mathcal{N}}} \frac{\mathcal{B}(w,w;\mu;\sigma)}{a(w,w;\mu)}, $$
((23))
and
$$ \overline{\tau}_{1}(\mu,\sigma) = \inf_{w \in X_{\mathcal{S}}(\mu,\sigma)} \frac{\mathcal{B}(w,w;\mu;\sigma)}{a(w,w;\mu)}, $$
((24))
and the fact that \(X_{\mathcal {S}}(\mu,\sigma) \subset X^{\mathcal {N}}\).
For n>1, the Courant–Fischer–Weyl min-max principle [17] states that for an arbitrary n-dimensional subspace S_{n} of \(X^{\mathcal {N}}\), we have
$$ \eta_{n}(\mu,\sigma) \equiv \max_{w \in S_{n}} \frac{\mathcal{B}(w,w;\mu;\sigma)}{a(w,w;\mu)} \geq \tau_{n}(\mu,\sigma). $$
((25))
Let \(S_{n} \equiv \text {span}\{ \overline {\chi }_{m}(\mu,\sigma), m=1,\ldots,n\} \subset X_{\mathcal {S}}(\mu,\sigma)\). Then \(\eta _{n}(\mu,\sigma) = \overline {\tau }_{n}(\mu,\sigma)\), and the result follows.

(ii)
This equivalence is due to (8).

(iii)
(⇐) Suppose σ=λ_{n}(μ) for some n; then by construction \(\chi _{n}(\mu) \in X_{\mathcal {S}}(\mu,\sigma)\). Since the same operator appears in both (19) and (21), it follows that χ_{n}(μ) is also an eigenmode for (21), (22) with corresponding eigenvalue 0. That is, for some n′, \(\overline {\tau }_{n'}(\mu,\sigma) = 0\) is an eigenvalue of (21), (22).
(⇒) Suppose \(\overline {\tau }_{n'}(\mu,\sigma) = 0\) for some index n′. Then \(\overline {\chi }_{n'}(\mu,\sigma)\) satisfies (19), (20), or equivalently, (3), (4) for τ_{n}(μ,σ)=0. From part (ii) of this Proposition, this implies that σ=λ_{n}(μ). □
Remark 2.3.
Regarding our method, the main result is 2.1(iii), which tells us how to recover eigenvalues of the original problem (3), (4) from the shifted and condensed problem (21), (22): we look for the values of σ such that (21), (22) has a zero eigenvalue. Note that in 2.1(iii), the equivalence between \(\overline {\tau }_{n'}(\mu,\sigma) = 0\) and σ=λ_{n}(μ) can occur for n′≠n. In practice, though, we always have n′=n, and there is a one-to-one correspondence between the original problem and the shifted and condensed system, which makes the eigenvalues much easier to track. We are not able to prove that n′=n in all cases, but assuming that property, we can demonstrate some stronger properties (see 4) that we will use to derive error estimates.
To assemble an algebraic system for the static condensation eigenproblem, we insert (18) into (21), (22) to arrive at
$$\begin{array}{@{}rcl@{}} &&\sum_{p=1}^{n^{\Gamma}}\sum_{k=1}^{\mathcal{N}_{p}^{\Gamma}}U_{p,k}(\mu,\sigma) \mathcal{B}(\Phi_{p,k}(\mu,\sigma),v;\mu;\sigma) \\ &&= \overline{\tau}(\mu,\sigma)\sum_{p=1}^{n^{\Gamma}}\sum_{k=1}^{\mathcal{N}_{p}^{\Gamma}}U_{p,k}(\mu,\sigma) a(\Phi_{p,k}(\mu,\sigma),v;\mu), \quad \forall v \in X_{\mathcal{S}}(\mu,\sigma), \end{array} $$
((26))
$$\begin{array}{@{}rcl@{}} && \sum_{p=1}^{n^{\Gamma}}\sum_{k=1}^{\mathcal{N}_{p}^{\Gamma}} \sum_{p'=1}^{n^{\Gamma}}\sum_{k'=1}^{\mathcal{N}_{p'}^{\Gamma}} U_{p,k}(\mu,\sigma)\, U_{p',k'}(\mu,\sigma)\, a(\Phi_{p,k}(\mu,\sigma),\Phi_{p',k'}(\mu,\sigma);\mu) = 1. \end{array} $$
((27))
We now define our local stiffness and mass matrices \(\mathbb {A}^{i}(\mu,\sigma), \mathbb {M}^{i}(\mu,\sigma) \in \mathbb {R}^{K_{i}\times K_{i}}\) for component i, which have entries
$$\begin{array}{@{}rcl@{}} \mathbb{A}^{i}_{k',k}(\mu,\sigma) &=& a_{i}(\psi_{i,k}+b_{i,k}(\mu,\sigma),\psi_{i,k'}+b_{i,k'}(\mu,\sigma);\mu), \\ \mathbb{M}^{i}_{k',k}(\mu,\sigma) &=& m_{i}(\psi_{i,k}+b_{i,k}(\mu,\sigma),\psi_{i,k'}+b_{i,k'}(\mu,\sigma);\mu), \end{array} $$
for 1≤k,k′≤K_{i}. We may then assemble the global system with matrices \(\mathbb {B}(\mu,\sigma),\mathbb {A}(\mu,\sigma) \in \mathbb {R}^{n_{\textit {sc}}\times n_{\textit {sc}}}\), of dimension \(n_{\textit {sc}}=\sum _{p=1}^{n^{\Gamma }}\mathcal {N}_{p}^{\Gamma }\): given \(\sigma \in \mathbb {R}\) and \(\mu \in \mathcal {D}\), we consider the eigenproblem
$$\begin{array}{@{}rcl@{}} \mathbb{B}(\mu,\sigma)\mathbb{V}(\mu,\sigma) &=& \overline{\tau}(\mu,\sigma)\mathbb{A}(\mu,\sigma){\mathbb{V}}(\mu,\sigma), \end{array} $$
((28))
$$\begin{array}{@{}rcl@{}} \mathbb{V}(\mu,\sigma)^{T}\mathbb{A}(\mu,\sigma)\mathbb{V}(\mu,\sigma) &=& 1, \end{array} $$
((29))
where
$$ \mathbb{B}(\mu,\sigma) \equiv \mathbb{A}(\mu,\sigma)-\sigma\mathbb{M}(\mu,\sigma). $$
((30))
As explained above, in order to find the eigenvalues of the original problem (3), (4), we need to find the values of σ for which (28), (29) has a zero eigenvalue. When performing this search, for each new value of σ that is considered, we need to perform the assembly of the static condensation system (28), which involves many finite element computations at the component level in order to get the bubble functions (16), and is potentially costly. Note that we also need to reassemble (28) when the parameters μ of the problem change. In order to dramatically reduce the computational cost of this assembly, we will use reduced order modeling techniques as described in the next Sections ‘Reduced basis static condensation system’ and ‘Port reduction’.
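The search over σ can be illustrated on the uncondensed shifted problem (3), where by (6) the smallest shifted eigenvalue is a continuous, decreasing function of σ that crosses zero exactly at σ = λ_{1}(μ). The sketch below brackets that zero crossing and locates it with a standard scalar root finder on stand-in matrices; the paper's actual procedure is the direct search algorithm of Section ‘Eigenvalue computation’ applied to the condensed system (28):

```python
import numpy as np
from scipy.linalg import eigh
from scipy.optimize import brentq

rng = np.random.default_rng(2)
n = 10

# Random SPD stand-ins for the assembled stiffness and mass matrices.
Q = rng.standard_normal((n, n))
A = Q @ Q.T + n * np.eye(n)
R = rng.standard_normal((n, n))
M = R @ R.T + n * np.eye(n)

lam = eigh(A, M, eigvals_only=True)      # reference eigenvalues, ascending

def tau_1(sigma):
    """Smallest eigenvalue of the shifted problem (A - sigma M) x = tau A x."""
    return eigh(A - sigma * M, A, eigvals_only=True)[0]

# tau_1(0) = 1 > 0 and tau_1(2 lambda_1) = -1 < 0 by (6)-(9), so the zero
# crossing is bracketed and sigma* recovers lambda_1.
sigma_star = brentq(tau_1, 0.0, 2.0 * lam[0])
assert np.isclose(sigma_star, lam[0])
```

Repeating the search on successive brackets recovers the higher eigenvalues in the same way.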
Reduced basis static condensation system
Reduced basis bubble approximation
In the static condensation reduced basis element (SCRBE) method [10], we replace the FE bubble functions b_{i,k}(μ,σ) with reduced basis approximations. These RB approximations are significantly less expensive to evaluate (following an RB “offline” preprocessing step) than the original FE quantities, and hence the computational cost associated with the formation of the (now approximate) static condensation system is significantly reduced. We thus introduce the RB bubble function approximations
$$ \tilde{b}_{i,k}(\mu,\sigma) \approx b_{i,k}(\mu,\sigma) $$
((31))
for a parameter domain \((\mu,\sigma) \in \mathcal {D}\times [0,\sigma _{\max }]\), where
$$\sigma_{\max} = \epsilon_{\sigma}\min_{\mu \in \mathcal{D}} \min\limits_{1\leq i\leq I}\lambda_{i,1}(\mu). $$
((32))
Here ε_{σ}(<1) is a “safety factor” which ensures that we honor the condition σ<λ_{i,1}(μ) for all 1≤i≤I. Next, we let
$$\widetilde\Phi_{p,k}(\mu,\sigma) = \Psi_{p,k} + \sum_{i,k_{i}\ \text{s.t.}\ \mathcal{P}_{i}(k_{i})=(p,k)} \tilde b_{i,k_{i}}(\mu,\sigma), $$
and define our RB static condensation space \(\widetilde {X}_{\mathcal {S}}(\mu,\sigma) \subset X^{\mathcal {N}}\) as
$$\widetilde{X}_{\mathcal{S}}(\mu,\sigma) = \text{span}\{\widetilde{\Phi}_{p,k}(\mu,\sigma):1\leq p\leq n^{\Gamma}, 1 \leq k \leq \mathcal{N}_{p}^{\Gamma}\}. $$
(Note that \(\widetilde {X}_{\mathcal {S}}(\mu,\sigma) \not \subset X_{\mathcal {S}}(\mu,\sigma)\)).
Remark 3.1.
As opposed to CMS, where the static condensation space is built from local component natural modes, the RB static condensation space \(\widetilde {X}_{\mathcal {S}}(\mu,\sigma)\) is built from RB bubbles that can accommodate any global mode shape thanks to their (μ,σ) parametrization. The only restriction is due to condition (32), which means that we are only guaranteed to capture global modes whose wavelength is typically greater than a component’s size.
We then define the RB eigenproblem: given \((\mu,\sigma) \in \mathcal {D}\times [0,\sigma _{\text {max}}]\), find the eigenpairs \((\widetilde {\overline {\tau }}_{n}(\mu,\sigma),\widetilde {\mathbb {V}}_{n}(\mu,\sigma))\) that satisfy
$$\begin{array}{@{}rcl@{}} \widetilde{\mathbb{B}}(\mu,\sigma)\widetilde{\mathbb{V}}(\mu,\sigma) &=& \widetilde{\overline{\tau}}(\mu,\sigma)\widetilde{\mathbb{A}}(\mu,\sigma)\widetilde{\mathbb{V}}(\mu,\sigma), \end{array} $$
((33))
$$\begin{array}{@{}rcl@{}} \widetilde{\mathbb{V}}(\mu,\sigma)^{T}\widetilde{\mathbb{A}}(\mu,\sigma)\widetilde{\mathbb{V}}(\mu,\sigma) &=& 1, \end{array} $$
((34))
where \(\widetilde {\mathbb {B}}(\mu,\sigma),\widetilde {\mathbb {A}}(\mu,\sigma)\) are constructed componentbycomponent from
$$\begin{array}{@{}rcl@{}} \widetilde{\mathbb{A}}^{i}_{k',k}(\mu,\sigma) &=& a_{i}(\psi_{i,k}+\tilde{b}_{i,k}(\mu,\sigma),\psi_{i,k'}+\tilde{b}_{i,k'}(\mu,\sigma);\mu), \end{array} $$
((35))
$$\begin{array}{@{}rcl@{}} \widetilde{\mathbb{M}}^{i}_{k',k}(\mu,\sigma) &=& m_{i}(\psi_{i,k}+\tilde{b}_{i,k}(\mu,\sigma),\psi_{i,k'}+\tilde{b}_{i,k'}(\mu,\sigma);\mu), \end{array} $$
((36))
for 1≤k,k′≤K_{i}, and where
$$ \widetilde{\mathbb{B}}^{i}(\mu,\sigma) \equiv \widetilde{\mathbb{A}}^{i}(\mu,\sigma)-\sigma\widetilde{\mathbb{M}}^{i}(\mu,\sigma). $$
((37))
Reduced basis error estimator
We now consider error estimation for our RB approximations. In order to derive error estimates, we will use Hypothesis A.1 which is related to Remark 2.3, and reads
$$\sigma = \lambda_{n}(\mu) \Leftrightarrow \overline{\tau}_{n}(\mu,\sigma) = 0.$$
Note that this hypothesis is solely used for error estimation; the computational method itself does not rely on this assumption.
First, since \(\widetilde {X}_{\mathcal {S}}(\mu,\sigma) \subset X^{\mathcal {N}}\), by the same argument as part (i) of Proposition 2.1, we have
Corollary 3.1.
$$ \widetilde{\overline{\tau}}_{n}(\mu,\sigma) \geq \tau_{n}(\mu,\sigma), \quad n=1,2,\ldots,n_{\text{sc}}. $$
((38))
□
We define the residual \(r_{i,k}(\cdot ;\mu,\sigma):X_{i;0}^{\mathcal {N}} \to \mathbb {R}\) for 1≤k≤K_{i} and 1≤i≤I as
$$ r_{i,k}(v;\mu,\sigma) = - \mathcal{B}_{i}(\psi_{i,k}+\tilde b_{i,k}(\mu,\sigma),v;\mu;\sigma), \quad \forall v \in X_{i;0}^{\mathcal{N}}, $$
and the error bound [4]
$$\|b_{i,k}(\mu,\sigma) - \tilde b_{i,k}(\mu,\sigma)\|_{X,i} \leq \widetilde\Delta_{i,k}(\mu,\sigma) = \frac{\mathcal{R}_{i,k}(\mu,\sigma)}{\alpha^{\text{LB}}_{i}(\mu,\sigma)}, $$
where
$$\mathcal{R}_{i,k}(\mu,\sigma) = \sup_{v \in X^{\mathcal{N}}_{i;0}} \frac{r_{i,k}(v;\mu,\sigma)}{\|v\|_{X,i}}$$
is the dual norm of the residual, and \(\alpha ^{\text {LB}}_{i}(\mu,\sigma)\) is a lower bound for the coercivity constant
$$\alpha_{i}(\mu,\sigma) = \inf_{w \in X_{i;0}^{\mathcal{N}}}\frac{\mathcal{B}_{i}(w,w;\mu;\sigma)}{\|w\|^{2}_{X,i}}, $$
that can be derived by hand for simple cases, or computed using a successive constraint linear optimization method [18].
We now assume that Hypothesis A.1 holds. Suppose we have found σ_{n}, the n^{th} “shift” such that \(\widetilde {\mathbb {B}}(\mu,\sigma _{n})\) has a zero eigenvalue, i.e. we have \(\widetilde {\overline {\tau }}_{n}(\mu,\sigma _{n})=0\). Then our RB-based approximation to the n^{th} eigenvalue is \(\tilde {\lambda }_{n}(\mu) = \sigma _{n}\). We will now develop a first-order error estimator for \(\overline {\tau }_{n}(\mu,\sigma _{n})\). We have
$$\mathbb{B}(\mu,\sigma_{n})\mathbb{V}(\mu,\sigma_{n}) = \overline{\tau}_{n}(\mu,\sigma_{n})\mathbb{A}(\mu,\sigma_{n})\mathbb{V}(\mu,\sigma_{n}), $$
and hence with \(\mathbb {B}(\mu,\sigma _{n}) \equiv \widetilde {\mathbb {B}}(\mu,\sigma _{n})+\delta \mathbb {B}(\mu,\sigma _{n})\), \(\mathbb {A}(\mu,\sigma _{n}) \equiv \widetilde {\mathbb {A}}(\mu,\sigma _{n})+\delta \mathbb {A}(\mu,\sigma _{n})\), \({\mathbb {V}}(\mu,\sigma _{n}) \equiv \widetilde {\mathbb {V}}(\mu,\sigma _{n})+\delta \mathbb {V}(\mu,\sigma _{n})\), we obtain
$$\begin{array}{*{20}l} &(\widetilde{\mathbb{B}}(\mu,\sigma_{n})+\delta\mathbb{B}(\mu,\sigma_{n}))(\widetilde{\mathbb{V}}(\mu,\sigma_{n}) +\delta{\mathbb{V}}(\mu,\sigma_{n})) =\\ &\overline{\tau}_{n}(\mu,\sigma_{n})(\widetilde{\mathbb{A}}(\mu,\sigma_{n})+\delta{\mathbb{A}}(\mu,\sigma_{n})) (\widetilde{{\mathbb{V}}}(\mu,\sigma_{n})+\delta{\mathbb{V}}(\mu,\sigma_{n})). \end{array} $$
((39))
Expansion of the above expression yields
$$\begin{array}{*{20}l} &\widetilde{\mathbb{B}}(\mu,\sigma_{n})\delta{\mathbb{V}}(\mu,\sigma_{n}) + \delta{\mathbb{B}}(\mu,\sigma_{n})\widetilde{{\mathbb{V}}}(\mu,\sigma_{n}) + \delta{\mathbb{B}}(\mu,\sigma_{n})\delta\mathbb{V}(\mu,\sigma_{n}) = \\ &\overline{\tau}_{n}(\mu,\sigma_{n})(\widetilde{\mathbb{A}}(\mu,\sigma_{n})\widetilde{{\mathbb{V}}}(\mu,\sigma_{n}) + \widetilde{\mathbb{A}}(\mu,\sigma_{n})\delta\mathbb{V}(\mu,\sigma_{n}) +\\& \delta{\mathbb{A}}(\mu,\sigma_{n})\widetilde{{\mathbb{V}}}(\mu,\sigma_{n}) + \delta{\mathbb{A}}(\mu,\sigma_{n})\delta{\mathbb{V}}(\mu,\sigma_{n})), \end{array} $$
((40))
where the identity \(\widetilde {\mathbb {B}}(\mu,\sigma _{n})\widetilde {\mathbb {V}}(\mu,\sigma _{n}) = 0\) has been employed. We then multiply through by \(\widetilde {\mathbb {V}}(\mu,\sigma _{n})^{T}\) and note that
$$\begin{array}{*{20}l} &\widetilde{{\mathbb{V}}}(\mu,\sigma_{n})^{T}\widetilde{\mathbb{B}}(\mu,\sigma_{n})\delta\mathbb{V}(\mu,\sigma_{n}) =\delta\mathbb{V}(\mu,\sigma_{n})^{T}\widetilde{\mathbb{B}}(\mu,\sigma_{n})\widetilde{\mathbb{V}}(\mu,\sigma_{n})= 0,\\ &\widetilde{{\mathbb{V}}}(\mu,\sigma_{n})^{T}\widetilde{\mathbb{A}}(\mu,\sigma_{n})\widetilde{{\mathbb{V}}}(\mu,\sigma_{n})=1 \end{array} $$
and neglect higher order terms to obtain
$$ \overline{\tau}_{n}(\mu,\sigma_{n}) \approx \widetilde{\mathbb{V}}(\mu,\sigma_{n})^{T}\delta\mathbb{B}(\mu,\sigma_{n})\widetilde{\mathbb{V}}(\mu,\sigma_{n}). $$
((41))
We then have the following bound
$$\begin{array}{*{20}l} \widetilde{\mathbb{V}}(\mu&,\sigma_{n})^{T}\delta\mathbb{B}(\mu,\sigma_{n})\widetilde{\mathbb{V}}(\mu,\sigma_{n}) \\ &\leq \sum_{i=1}^{I} \sum_{k=1}^{K_{i}} \sum_{j=1}^{I} \sum_{l=1}^{K_{j}}\widetilde{\mathbb{V}}_{\mathcal{P}_{i}(k)}(\mu,\sigma_{n})\widetilde\Delta_{i,k}(\mu,\sigma_{n})\widetilde\Delta_{j,l}(\mu,\sigma_{n})\widetilde{\mathbb{V}}_{\mathcal{P}_{j}(l)}(\mu,\sigma_{n}) \\ &\equiv \widetilde\Delta(\mu,\sigma_{n}). \end{array} $$
((42))
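Since the quadruple sum in (42) factorizes, the bound \(\widetilde\Delta(\mu,\sigma_{n})\) can be evaluated as the square of a single weighted sum of the per-bubble RB error bounds. A numpy sketch with made-up per-component data (absolute values of the eigenvector entries are taken, as the bound implies):

```python
import numpy as np

rng = np.random.default_rng(3)

# Made-up data for I = 2 components: RB bubble error bounds Delta_{i,k}
# and the eigenvector entries V_{P_i(k)} gathered per local port dof.
deltas = [rng.uniform(1e-6, 1e-4, size=4), rng.uniform(1e-6, 1e-4, size=6)]
V_local = [rng.standard_normal(4), rng.standard_normal(6)]

# Quadruple sum of (42), written out directly...
direct = sum(
    abs(V_local[i][k]) * deltas[i][k] * deltas[j][l] * abs(V_local[j][l])
    for i in range(2) for k in range(len(deltas[i]))
    for j in range(2) for l in range(len(deltas[j]))
)

# ...which collapses to the square of one weighted sum, so the bound is
# evaluated in O(sum K_i) operations rather than O((sum K_i)^2).
weighted_sum = sum(np.abs(V_local[i]) @ deltas[i] for i in range(2))
assert np.isclose(direct, weighted_sum**2)
```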
From Proposition 2.1 part (iii), we can only infer eigenvalues of (1), (2) when \(\overline {\tau }_{n}(\mu,\sigma) = 0\); hence (42) does not give us a direct bound on the error of \(\tilde {\lambda }_{n}(\mu)\). However, with the assumption that \(\widetilde \Delta (\mu,\sigma _{n}) \to 0\) in the limit as N→∞, we see that \(\overline {\tau }_{n}(\mu,\sigma _{n}) \to 0\) and hence asymptotically \(\tilde {\lambda }_{n}(\mu)\) converges to λ_{n}(μ). Moreover, we can develop an asymptotic error estimator. From Proposition A.1, we have
$$\begin{array}{*{20}l} \overline{\tau}_{n}(\mu,\tilde\lambda_{n}(\mu))&\approx \overline{\tau}_{n}(\mu,\lambda_{n}(\mu)) + (\tilde\lambda_{n}(\mu)-\lambda_{n}(\mu))\frac{\partial\overline{\tau}_{n}(\mu,\lambda_{n}(\mu))}{\partial\sigma} \\ &= \frac{\lambda_{n}(\mu)-\tilde\lambda_{n}(\mu)}{\lambda_{n}(\mu)}. \end{array} $$
((43))
Combining (42) and (43) gives the following asymptotic (relative) error estimator
$$ \frac{\lambda_{n}(\mu)-\tilde\lambda_{n}(\mu)}{\lambda_{n}(\mu)} \lesssim \widetilde\Delta(\mu,\sigma_{n}). $$
((44))
Port reduction
Empirical mode construction
In practice, for the basis functions of the port space \(Z_{p}^{\mathcal {N}}\), we use a simple Laplacian eigenmode decomposition, corresponding to the eigenfunctions ζ_{p,k} of the following eigenproblem
$$\begin{array}{*{20}l} \int_{\Gamma_{p}} \nabla \zeta_{p,k} \cdot \nabla v = \Lambda_{p,k} \int_{\Gamma_{p}} \zeta_{p,k} v,\quad\forall v\in Z_{p}^{\mathcal{N}},\quad 1\leq k \leq {\mathcal{N}}_{p}^{\Gamma}. \end{array} $$
((45))
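As an illustration of (45), consider a one-dimensional port discretized by finite differences with homogeneous Dirichlet ends (an assumption of this sketch; the paper's ports are FE trace spaces, possibly two-dimensional). The discrete Laplacian eigenvectors, i.e. the port modes sorted by increasing eigenvalue, are the familiar sine modes:

```python
import numpy as np
from scipy.linalg import eigh_tridiagonal

# 1D port with n interior grid points and spacing h; standard second-order
# finite-difference Laplacian with homogeneous Dirichlet ends.
n = 50
h = 1.0 / (n + 1)
Lam, Z = eigh_tridiagonal(2.0 * np.ones(n) / h**2, -np.ones(n - 1) / h**2)

# Eigenvalues come out ascending; the k-th port mode zeta_k is, up to sign
# and normalization, sin((k+1) pi x) sampled on the interior grid points.
x = np.linspace(h, 1.0 - h, n)
mode0 = np.sin(np.pi * x)
mode0 /= np.linalg.norm(mode0)
z0 = Z[:, 0] * np.sign(Z[n // 2, 0])     # fix the arbitrary sign
assert np.allclose(z0, mode0)
assert Lam[0] < Lam[1] < Lam[2]          # sorted, simple spectrum in 1D
```

Truncating this basis keeps the smoothest modes first, which is what makes the truncation mentioned next relatively benign.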
We can truncate the Laplacian eigenmode expansion in order to reduce \({\mathcal {N}}_{p}^{\Gamma }\) – often without any significant loss in accuracy of the method. However, we can obtain better results by tailoring the port basis functions to a specific class of problems. A strategy for the construction of such empirical port modes is presented in [16]. We briefly describe this strategy here and refer the reader to [16] for further detail.
A key observation is that, in a system of components, the solution on any given interior global port is “only” influenced by the parameter dependence of the two components that share this port and the solution on the nonshared ports of these two components. We shall exploit this observation to explore the solution manifold associated with a given port through a pairwise training algorithm.
To construct the empirical modes we first identify groups of local ports on the components which may interconnect; the port spaces for all ports in each such group must be identical. For each pair of local ports within each group (connected to form a global port Γ_{p}), we execute Algorithm 1: we sample this I=2 component system many (N_{samples}) times for random (typically uniformly or log-uniformly distributed) parameters over the parameter domain and for random boundary conditions on nonshared ports. For each sample we extract the solution on the shared port Γ_{p}; we then subtract its average and add the resulting zero-mean function to a snapshot set S_{pair}. Note that by construction all functions in S_{pair} are thus orthogonal to the constant function.
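The snapshot collection of Algorithm 1 can be sketched as follows. This is a toy stand-in, not the paper's implementation: `solve_pair` below is a hypothetical placeholder for the actual two-component FE solve, here replaced by a simple parameter-dependent linear response so the sketch is self-contained.

```python
import numpy as np

rng = np.random.default_rng(0)

def solve_pair(mu, g_nonshared):
    """Stand-in for the two-component (I = 2) solve: returns the solution
    trace on the shared port Gamma_p. A toy parameter-dependent linear
    response replaces the actual finite element solve of the pair."""
    n_port = g_nonshared.shape[0]
    A = np.eye(n_port) + mu * np.fromfunction(
        lambda i, j: np.exp(-np.abs(i - j)), (n_port, n_port))
    return A @ g_nonshared

def pairwise_snapshots(n_samples=200, n_port=20, mu_range=(0.1, 10.0)):
    """Algorithm 1 sketch: random (log-uniform) parameters and random
    boundary data on the nonshared ports; collect zero-mean port traces."""
    S_pair = []
    for _ in range(n_samples):
        mu = np.exp(rng.uniform(np.log(mu_range[0]), np.log(mu_range[1])))
        g = rng.standard_normal(n_port)     # random nonshared-port data
        trace = solve_pair(mu, g)
        trace -= trace.mean()               # subtract the average
        S_pair.append(trace)
    return np.array(S_pair)

S_pair = pairwise_snapshots()
```

By construction every row of `S_pair` has zero mean, i.e. is orthogonal to the constant function, as required above.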
Upon completion of Algorithm 1 for all possible component connectivity within a library, we form a larger snapshot set S_{group} which is the union of all the snapshot sets S_{pair} generated for each pair. We then perform a data compression step: we invoke proper orthogonal decomposition (POD) [19] (with respect to the L^{2}(Γ_{p}) inner product). The output from the POD procedure is a set of mutually L^{2}(Γ_{p})-orthonormal empirical modes that have the additional property that they are orthogonal to the constant mode.
Note that each POD compression step operates on a possibly large dataset of vectors, but each vector is small: its size equals the number of degrees of freedom of a given 2D port (for example the square port in Figure 3). Hence the POD procedure described here is computationally cheap, unlike POD for datasets of full 3D solution fields.
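The POD compression step admits a short SVD-based sketch. This is a generic implementation under our own assumptions (dense snapshots, an explicitly supplied SPD matrix `M` standing in for the discrete L^{2}(Γ_p) inner product; the function name is ours), not the paper's code.

```python
import numpy as np

def pod_modes(S, M, n_modes):
    """POD of the snapshot set S (one snapshot per row) with respect to the
    inner product induced by the SPD matrix M (here the discrete L2(Gamma_p)
    inner product). Returns n_modes M-orthonormal modes (one per column)
    and the singular values."""
    L = np.linalg.cholesky(M)        # M = L L^T
    S_t = S @ L                      # coordinates where the M-inner product
                                     # becomes the Euclidean one
    _, s, Vt = np.linalg.svd(S_t, full_matrices=False)
    Phi = np.linalg.solve(L.T, Vt[:n_modes].T)   # map dominant directions back
    return Phi, s
```

Since each POD mode is a linear combination of the snapshots, zero-mean snapshots automatically yield modes orthogonal to the constant, consistent with the property stated above.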
Port-reduced system
In practice we use the SCRBE – RB approximations for the bubble functions – but as we will see in the results section, the error introduced by the RB approximation is negligible compared to the error due to port reduction. We therefore describe the port reduction procedure starting from the “truth” static condensation system (28), although in practice we apply the port reduction to the SCRBE system (33). We recall that on port p the full port space is given as
$$\begin{array}{*{20}l} Z_{p}^{\mathcal{N}} = \text{span}\left\{\zeta_{p,1},\cdots,\zeta_{p,\mathcal{N}_{p}^{\Gamma}}\right\}. \end{array} $$
((46))
For each port, we shall choose a desired port space dimension n_{A,p} such that \(1\leq n_{\mathrm {A},p} \leq {\mathcal {N}}_{p}^{\Gamma }\). We shall then consider the basis functions ζ_{p,k}, 1≤k≤n_{A,p}, as the active port modes (hence subscript A); we consider the \(n_{\mathrm {I},p} = {\mathcal {N}}_{p}^{\Gamma } - n_{\mathrm {A},p}\) remaining basis functions ζ_{p,k}, \(n_{\mathrm {A},p}+1 \leq k \leq {\mathcal {N}}_{p}^{\Gamma }\), as inactive (hence subscript I). Note that \(\text {span}\{\zeta _{p,1},\dotsc,\zeta _{p,n_{\mathrm {A},p}}\} \subseteq Z_{p}^{\mathcal {N}}\). We then introduce
$$\begin{array}{*{20}l} n_{\mathrm{A}} \equiv \sum_{p = 1}^{n^{\Gamma}} n_{{\mathrm{A}},p},\qquad n_{\mathrm{I}} \equiv \sum_{p = 1}^{n^{\Gamma}} n_{{\mathrm{I}},p}, \end{array} $$
((47))
as the total number of active and inactive port modes, respectively; and n_{SC}=n_{A}+n_{I} is the total number of port modes in the nonreduced system.
Next, we assume a particular ordering of the degrees of freedom in (28): we first order the degrees of freedom corresponding to the n_{A} active system port modes, followed by the degrees of freedom corresponding to the n_{I} inactive system port modes. We may then interpret (28) as
$$ \left[ \begin{array}{cc} \mathbb{B}_{\text{AA}}(\mu,\sigma) & \mathbb{B}_{\text{AI}}(\mu,\sigma) \\ \mathbb{B}_{\text{IA}}(\mu,\sigma) & \mathbb{B}_{\text{II}}(\mu,\sigma) \end{array} \right] \mathbb{V}(\mu,\sigma) = \overline{\tau}(\mu,\sigma) \left[\begin{array}{cc} \mathbb{A}_{\text{AA}}(\mu,\sigma) & \mathbb{A}_{\text{AI}}(\mu,\sigma) \\ \mathbb{A}_{\text{IA}}(\mu,\sigma) & \mathbb{A}_{\text{II}}(\mu,\sigma) \end{array}\right] \mathbb{V}(\mu,\sigma), $$
((48))
where the four blocks in the matrices correspond to the various couplings between active and inactive modes; note that \(\mathbb {B}_{\text {AA}}(\mu)\in \mathbb {R}^{n_{\mathrm {A}} \times n_{\mathrm {A}}}\) and that \(\mathbb {B}_{\text {II}}(\mu)\in \mathbb {R}^{n_{\mathrm {I}}\times n_{\mathrm {I}}}\). Our port-reduced approximation \(\widehat {\overline {\tau }}(\mu,\sigma)\) shall be given as the solution to the n_{A}×n_{A} system
$$\begin{array}{*{20}l} \mathbb{B}_{\text{AA}}(\mu,\sigma)\mathbb{V}_{\mathrm{A}}(\mu,\sigma) &= \widehat{\overline{\tau}}(\mu,\sigma)\mathbb{A}_{\text{AA}}(\mu,\sigma)\mathbb{V}_{\mathrm{A}}(\mu,\sigma),\\ \mathbb{V}_{\mathrm{A}}(\mu,\sigma)^{T}\mathbb{A}_{\text{AA}}(\mu,\sigma)\mathbb{V}_{\mathrm{A}}(\mu,\sigma) &= 1 \end{array} $$
((49))
in which we may discard the (presumably large) \(\mathbb {B}_{\text {II}}(\mu,\sigma)\) and \(\mathbb {A}_{\text {II}}(\mu,\sigma)\) blocks; however the \(\mathbb {B}_{\text {IA}}(\mu,\sigma)\) block is required later for residual evaluation in the context of a posteriori error estimation.
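The reduced solve (49) amounts to extracting the leading blocks and solving a small generalized eigenproblem. The sketch below is our own illustration under stated assumptions: dense symmetric matrices, A symmetric positive definite, degrees of freedom already ordered active-first, and a hypothetical function name.

```python
import numpy as np
from scipy.linalg import eigh

def port_reduced_smallest(B, A, n_active):
    """Solve the port-reduced eigenproblem (49): restrict the symmetric
    pencil (B, A) to the leading n_active (active) port dofs and return the
    smallest reduced eigenvalue with its A_AA-normalized eigenvector."""
    B_AA = B[:n_active, :n_active]
    A_AA = A[:n_active, :n_active]
    tau, V = eigh(B_AA, A_AA)     # eigh returns V with V^T A_AA V = I
    return tau[0], V[:, 0]
```

For a symmetric positive definite pencil, the min-max principle guarantees that the smallest reduced eigenvalue is an upper bound for the smallest eigenvalue of the full n_{SC}×n_{SC} system, since the reduced problem minimizes the Rayleigh quotient over a subspace.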
Port reduction error estimator
We put a \(\enspace \widehat \cdot \enspace \) on top of all the port-reduced quantities. In this section only we will use Hypothesis A.1 in order to derive error estimates, but note that the port reduction procedure does not require this assumption. Suppose we have found σ_{n} such that \(\widehat {\overline {\tau }}_{n}(\mu,\sigma _{n})=0\), with eigenvector of size n_{SC} in the nonreduced space
$$\widehat{\mathbb{V}}_{n}(\mu,\sigma_{n})= \left[\begin{array}{c} \mathbb{V}_{\mathrm{A},n}(\mu,\sigma_{n}) \\ 0 \end{array}\right]. $$
We can expand \(\widehat {\mathbb {V}}_{n}(\mu,\sigma _{n})\) in terms of the eigenvectors \(\mathbb {V}_{m}(\mu,\sigma _{n})\) of the nonreduced space
$$\widehat{\mathbb{V}}_{n}(\mu,\sigma_{n}) = \sum_{m=1}^{n_{\text{SC}}} \alpha_{m}(\mu,\sigma_{n}) \mathbb{V}_{m}(\mu,\sigma_{n}). $$
Since \(\widehat {\overline {\tau }}_{n}(\mu,\sigma _{n})=0\), we can reasonably assume that \(\overline {\tau }_{n}(\mu,\sigma _{n})=\min \limits _{1\leq m \leq n_{\text {SC}}}\overline {\tau }_{m}(\mu,\sigma _{n})\). We now look at the following residual
$$\begin{array}{*{20}l} \mathbb{B}(\mu,\sigma_{n}) \widehat{\mathbb{V}}_{n}(\mu,\sigma_{n}) &= \sum_{m=1}^{n_{\text{SC}}} \alpha_{m}(\mu,\sigma_{n}) \mathbb{B}(\mu,\sigma_{n}) \mathbb{V}_{m}(\mu,\sigma_{n})\\ &= \sum_{m=1}^{n_{\text{SC}}} \alpha_{m}(\mu,\sigma_{n}) \overline{\tau}_{m}(\mu,\sigma_{n}) \mathbb{A}(\mu,\sigma_{n}) \mathbb{V}_{m}(\mu,\sigma_{n}), \end{array} $$
so using the \(\mathbb {A}(\mu,\sigma _{n})\)-orthogonality of the \(\mathbb {V}_{m}(\mu,\sigma _{n})\) we obtain
$$\begin{array}{*{20}l} \left\|\mathbb{B}(\mu,\sigma_{n}) \widehat{\mathbb{V}}_{n}(\mu,\sigma_{n})\right\|^{2}_{\mathbb{A}(\mu,\sigma_{n})^{-1}} & = \sum_{m=1}^{n_{\text{SC}}} \overline{\tau}_{m}(\mu,\sigma_{n})^{2} \left\|\alpha_{m}(\mu,\sigma_{n}) \mathbb{A}(\mu,\sigma_{n}) \mathbb{V}_{m}(\mu,\sigma_{n}) \right\|^{2}_{\mathbb{A}(\mu,\sigma_{n})^{-1}} \\ & \geq \overline{\tau}_{n}(\mu,\sigma_{n})^{2} \sum_{m=1}^{n_{\text{SC}}} \left\|\alpha_{m}(\mu,\sigma_{n}) \mathbb{A}(\mu,\sigma_{n}) \mathbb{V}_{m}(\mu,\sigma_{n}) \right\|^{2}_{\mathbb{A}(\mu,\sigma_{n})^{-1}} \\ & = \overline{\tau}_{n}(\mu,\sigma_{n})^{2} \left\| \sum_{m=1}^{n_{\text{SC}}} \alpha_{m}(\mu,\sigma_{n}) \mathbb{A}(\mu,\sigma_{n}) \mathbb{V}_{m}(\mu,\sigma_{n}) \right\|^{2}_{\mathbb{A}(\mu,\sigma_{n})^{-1}} \\ & = \overline{\tau}_{n}(\mu,\sigma_{n})^{2}, \end{array} $$
where we use the norm induced by the \(\mathbb {A}(\mu,\sigma _{n})^{-1}\) scalar product. We thus obtain the following error bound
$$\begin{array}{*{20}l} \widehat{\Delta}(\mu,\sigma_{n})\equiv\left\|\mathbb{B}(\mu,\sigma_{n}) \widehat{\mathbb{V}}_{n}(\mu,\sigma_{n})\right\|_{\mathbb{A}(\mu,\sigma_{n})^{-1}} \geq \overline{\tau}_{n}(\mu,\sigma_{n}). \end{array} $$
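The residual bound can be checked numerically on a small model pencil. The sketch below is our own: since a generic pencil is not pre-shifted to a zero eigenvalue, we apply the shift explicitly so that the reduced eigenvalue plays the role of \(\widehat{\overline{\tau}}_{n}(\mu,\sigma_{n})=0\), and the \(\mathbb{A}^{-1}\)-norm of the residual then bounds the distance to the nearest full eigenvalue; the function name is hypothetical.

```python
import numpy as np
from scipy.linalg import eigh, cho_factor, cho_solve

def reduced_residual_bound(B, A, n_active):
    """Residual bound for the smallest port-reduced eigenvalue of the
    symmetric pencil (B, A), A SPD. Returns the reduced eigenvalue and
    delta = ||(B - tau_hat A) v_hat||_{A^{-1}}, which bounds the distance
    from tau_hat to the nearest eigenvalue of the full pencil."""
    B_AA, A_AA = B[:n_active, :n_active], A[:n_active, :n_active]
    tau, V = eigh(B_AA, A_AA)
    tau_hat, v_A = tau[0], V[:, 0]      # smallest reduced eigenpair
    v_hat = np.zeros(B.shape[0])
    v_hat[:n_active] = v_A              # zero-padded on inactive dofs
    r = (B - tau_hat * A) @ v_hat       # residual of the shifted pencil
    delta = np.sqrt(r @ cho_solve(cho_factor(A), r))   # ||r||_{A^{-1}}
    return tau_hat, delta
```

The same eigenvector expansion as in the derivation above shows \(\min_m |\overline{\tau}_m - \widehat{\overline{\tau}}| \leq \widehat{\Delta}\), which is what the sketch verifies.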
Finally, we recover an error estimator for the eigenvalue λ_{n}(μ) of the original eigenproblem. Assuming \(\widehat \lambda _{n}(\mu)\) is close to λ_{n}(μ), we can then use Proposition A.1 as in (43), and we get the relative error estimator
$$\frac{\lambda_{n}(\mu)-\widehat\lambda_{n}(\mu)}{\lambda_{n}(\mu)} \lesssim \widehat\Delta(\mu,\sigma_{n}). $$
It is important to note that \(\widehat \Delta (\mu,\sigma _{n})\) will only decrease linearly in the residual, whereas the actual eigenvalue error is expected to decrease quadratically in the residual. This is because port reduction can be viewed as a Galerkin approximation over a subspace of the skeleton space \(X_{\mathcal {S}}(\mu,\sigma)\), and in that framework several a priori and a posteriori error results demonstrate the quadratic convergence of the eigenvalue [20]. As a consequence the effectivity of the error estimator \(\widehat \Delta (\mu,\sigma _{n})\) is expected to degrade as n_{A,p} gets larger.
Note that
$$\mathbb{B}(\mu,\sigma_{n}) \widehat{\mathbb{V}}_{n}(\mu,\sigma_{n})=\left[ \begin{array}{c} 0 \\ \mathbb{B}_{\text{IA}}(\mu,\sigma_{n})\mathbb{V}_{\mathrm{A},n}(\mu,\sigma_{n}) \end{array}\right], $$
and so the computation of the residual requires the additional assembly of \(\mathbb {B}_{\text {IA}}(\mu,\sigma _{n})\), which does not incur significant extra computation since in practice we will consider n_{A}≪n_{I}. In contrast, the computation of the norm \(\|\cdot \|_{\mathbb {A}(\mu,\sigma _{n})^{-1}}\) requires the assembly and inversion of \(\mathbb {A}(\mu,\sigma _{n})\), the full Schur complement stiffness matrix, which would potentially eliminate any speedup obtained by the port reduction. This computational issue is resolved by using an upper bound for \(\|\cdot \|_{\mathbb {A}(\mu,\sigma _{n})^{-1}}\) which is based on a nonconforming version \(\mathbb {A}'(\mu,\sigma _{n})\) of the stiffness operator and a parameter-independent preconditioner: the former permits online computation of small matrix inverses locally on each component, and the latter allows us to precompute nonreduced matrices and their Cholesky decompositions in an offline stage. The entire procedure is described in detail in [16].
Computational aspects
In this section, we summarize the main steps of the method from a computational point of view. There are two clearly separated stages. The “Offline” stage involves heavy precomputations and is performed only once. The “Online” stage corresponds to the actual solution of the eigenproblem and can be performed many times for various parameters μ and different eigenvalue targets. The “Online” computations are very fast thanks to our approach and allow us to solve eigenproblems in a many-query context such as model optimization or design.
Offline computations
In the Offline stage, we already have some knowledge about the class of eigenproblems we will have to solve. We know the bilinear forms a and m corresponding to the stiffness and mass operators. We have a predefined library of archetype components that are allowed to be connected together at compatible ports to form larger systems in the Online stage. See Figure 3 for an example of such a library, and Figure 6 for an example of a system obtained from component assembly. Note that each archetype component in the library is allowed to have some parametric variability.
For each port type corresponding to a possible connection between archetype components, we perform the following computations:

–

Construct the port space basis: either the Laplacian eigenmodes (45), or the empirical port modes obtained from the pairwise training and POD compression described above.
For each archetype component, we perform the following computations:

Compute the harmonic extension of the port modes inside the archetype component reference domain to get the interface functions.

For each interface function, compute a reduced basis space for the bubble equation (16). Each RB space is tuned for the stiffness and mass operators, as well as the component parametric variability and the shift σ variability.

Precompute the component quantities used in (35) and (36), so that they are ready in the Online stage for system assembly.
Online stage
System assembly. In the Online stage, we form a component assembly by instantiating I components from our library of archetype components, and connecting them together. Several instantiated components can correspond to the same archetype component, but with possibly different parameter values. Each instantiated component i has a set of parameter values μ_{i}, and the whole system has a set of parameters μ=∪_{i=1..I}μ_{i}. We also define a value of σ for the whole system.
For each instantiated component i, we perform the following computations:

Compute the RB approximations of the bubble functions for parameter values (μ_{i},σ).

Compute the component stiffness and mass matrices (35), (36).
At the system level, we perform the following computations:

Assemble the system (33) for parameter values (μ,σ), using the component matrices (35), (36) previously computed for each instantiated component.
Eigenvalue computation. At this point, we need to find the values of σ for which the system (33) has a zero eigenvalue. We proceed by fixing an eigenvalue number n and then follow Algorithm 2 with tolerance δ≪1.
Applying this algorithm for n=1,2,3,… we can recover the first eigenvalues of the component assembly. In practice Brent’s method [21] applied to the search of σ such that \(\widetilde {\overline {\tau }}_{n}(\mu,\sigma)=0\) converges in about 10 iterations, and there is only a single root for the function \(\sigma \mapsto \widetilde {\overline {\tau }}_{n}(\mu,\sigma)\).
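The outer root-finding loop can be illustrated on a small model pencil. The following sketch is our own: a generic dense symmetric positive definite pair (K, M) stands in for the condensed system, so that the n-th eigenvalue of the shifted pencil vanishes exactly at σ = λ_n, and Brent's method recovers it.

```python
import numpy as np
from scipy.linalg import eigh
from scipy.optimize import brentq

rng = np.random.default_rng(3)
n = 8
X = rng.standard_normal((n, n))
K = X @ X.T + n * np.eye(n)              # SPD "stiffness" stand-in
Y = rng.standard_normal((n, n))
M = Y @ Y.T + n * np.eye(n)              # SPD "mass" stand-in

def tau(sigma, n_eig=0):
    """n_eig-th smallest eigenvalue of the shifted pencil (K - sigma M, M);
    it vanishes exactly when sigma equals the n_eig-th eigenvalue of (K, M)."""
    return eigh(K - sigma * M, M, eigvals_only=True)[n_eig]

lam_ref = eigh(K, M, eigvals_only=True)  # reference eigenvalues of (K, M)
# Brent's method on sigma -> tau(sigma), bracketed by [0, max eigenvalue + 1]
lam_1 = brentq(tau, 0.0, lam_ref[-1] + 1.0)
```

Here tau(σ) = λ_1 − σ is monotone in σ, which is consistent with the single-root behavior of \(\sigma \mapsto \widetilde{\overline{\tau}}_{n}(\mu,\sigma)\) observed in practice.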
Once an approximation \(\widetilde {\lambda }_{n}(\mu)=\sigma \) of the eigenvalue has been found, we obtain an associated eigenvector following (33). Note that we use a standard eigensolver from the SLEPc library [22] as a black box, hence we have no control over the eigenvector computation, especially when the eigenvalue multiplicity is two or more.
Remark 5.1.
The parametric dependence comes into play in the Online stage when the RB bubble functions are computed, as they depend on (μ,σ). As a consequence, the resulting shifted system depends on (μ,σ), and so do its eigenvalues \(\widetilde {\overline {\tau }}_{n}(\mu,\sigma)\). The vector of parameters μ is chosen by the user for the whole system (material properties of the different components, geometry), while σ is automatically updated at each step of Algorithm 2: as a result, the RB bubble functions have to be recomputed at each step of Algorithm 2. In the end though, we obtain an approximation \(\widetilde {\lambda }_{n}(\mu)\) that depends only on μ, the “natural” parameters of the original system. The user is then free to modify the system by choosing a different vector of parameters μ^{′}, and restart Algorithm 2.