Having stated the equations that represent the physical problem and the standard model reduction approach, we now describe the FE-ROM approximation of the problem.

In this section, let us denote as the continuous functional space where \(\varvec{Y}\) exists. Instead of following a standard Galerkin approximation of the variational problem—where the FE space is denoted as —we construct the approximation space for the ROM. The FE space is assumed to be built on a FE partition of the domain \(\Omega \). The order of the FE interpolation is irrelevant for our discussion.

Introducing the notation \((\cdot ,\cdot )=(\cdot ,\cdot )_\Omega \) and \((\cdot ,\cdot )_\Gamma \) for the \(L^2\)-inner product on \(\Omega \) and \(\Gamma \), respectively—or for the integral of the product of two functions when they are not square integrable but their product is integrable—we can define the variational problem as finding , such that:

where \(\varvec{\Upsilon }_r\) is the test function and the problem is written using the forms *B* and *L* defined as:

$$\begin{aligned} B(\varvec{Y};\varvec{Y}_r,\varvec{\Upsilon }_r) {:}{=}&\left( \varvec{A}_i^c(\varvec{Y}) \partial _i{\varvec{Y}_r}, \varvec{\Upsilon }_r \right) - ( \varvec{A}_i^f(\varvec{Y}) \varvec{Y}_r,\partial _i{\varvec{\Upsilon }_r} ) \nonumber \\&+\, \left( \varvec{K}_{ij}(\varvec{Y}) \partial _j{\varvec{Y}_r},\partial _i{\varvec{\Upsilon }_r} \right) + \left( \varvec{S}(\varvec{Y}) \varvec{Y}_r,\varvec{\Upsilon }_r \right) , \end{aligned}$$

(11)

$$\begin{aligned} L(\varvec{\Upsilon }_r) {:}{=}&\left( \varvec{F},\varvec{\Upsilon }_r \right) + \left( \hat{\varvec{T}}_n,\varvec{\Upsilon }_r \right) _\Gamma . \end{aligned}$$

(12)

Note that the terms in Eq. 11 where \(\varvec{Y}\) is not approximated by \(\varvec{Y}_r\) involve non-linearities. In a later section we discuss the linearization scheme and two ways of approximating these terms.

### VMS for FE-ROM

Given the well-known lack of stability of the standard Galerkin formulation in convection-dominated regimes, a stabilization technique is necessary. Inspired by previous works that acknowledge these instability issues [17, 18], and using a VMS framework as has been done for several problems in FE approximations, we develop what we call the FE-ROM-Subgrid-Scales (SGS), which resembles the FE-SGS.

We start by splitting the unknown of the continuous problem \(\varvec{Y}\) into a part \(\varvec{Y}_r\), which can be resolved by the standard FE-ROM approximation, and a remainder \(\breve{\varvec{Y}}\), the FE-ROM-SGS. In this way we can re-define the approximation of the continuous space as , where the SGS space is any space that completes in . Then, the variational form of the problem in Eq. 10 expands into two: finding and , such that:

Since it is desirable to avoid derivatives over the subscale \(\breve{\varvec{Y}}\), we can re-write the bilinear form \(B(\varvec{Y};\breve{\varvec{Y}},\varvec{\Upsilon }_r)\) in Eq. 13a by integrating by parts within each element and assuming that the exact tractions are continuous across inter-element boundaries, yielding:

where \((\cdot ,\cdot )_K\) is the \(L^2\) inner product over element *K*, and \(\mathcal {L}^*\) is the adjoint of the linear operator \(\mathcal {L}(\varvec{Y};\cdot )\). Following the formulation in [12], this adjoint operator \(\mathcal {L}^*\) is:

$$\begin{aligned} \mathcal {L}^*(\varvec{Y};\varvec{\Upsilon }_r) {:}{=}&\, \partial _t{(\varvec{M} (\varvec{Y}) \varvec{\Upsilon }_r)} - \varvec{A}_j^{f\top }(\varvec{Y}) \partial _j{ \varvec{\Upsilon }_r} \nonumber \\&-\, \varvec{A}_j^{c\top }(\varvec{Y}) \partial _j{ \varvec{\Upsilon }_r} - \partial _{j}{\varvec{K}_{ij}^\top (\varvec{Y}) \partial _{i}{\varvec{\Upsilon }_r}} + \varvec{S}^\top (\varvec{Y})\varvec{\Upsilon }_r. \end{aligned}$$

(15)

### Subscales approximation

At this point, we have described two sets of equations: a FE-ROM one, in which all the terms have been described, and a FE-ROM-SGS one. In order to solve the second system, we follow a standard VMS formulation. After integrating some terms by parts, neglecting the tractions across inter-element boundaries, and replacing \(\varvec{u} \cdot \nabla \rho \) by \(-\frac{\rho \varvec{u}}{T} \cdot \nabla T\), we can re-write Eq. 13b as:

with the residual defined as \(\varvec{R}(\varvec{Y},\varvec{Y}_r) = \varvec{F} - \varvec{M}(\varvec{Y})\partial _t{\varvec{Y}_r} - \breve{\mathcal {L}}(\varvec{Y};\varvec{Y}_r)\), and the linear operator defined as \(\breve{\mathcal {L}} (\varvec{Y};\varvec{Y}_r) {:}{=}\partial _i{(\varvec{A}_i^c(\varvec{Y}) \varvec{Y}_r)} + \breve{\varvec{A}}_i^f(\varvec{Y}_r) \partial _i{\varvec{Y}} - \partial _i{ \left( \varvec{K}_{ij}(\varvec{Y}) \partial _j{\varvec{Y}_r} \right) } + \varvec{S}(\varvec{Y})\varvec{Y}_r\), with:

$$\begin{aligned} \breve{\varvec{A}}_i^f = \begin{bmatrix} \varvec{0} \otimes \varvec{0}&\varvec{e}_i&\varvec{0} \\ \rho \varvec{e}_i^\top&0&\frac{\rho u_i}{T} \\ \varvec{0}^\top&0&0 \end{bmatrix}. \end{aligned}$$

(17)

We can re-write Eq. 16 in the following way:

where \(\varvec{\Upsilon }_r^\perp \) is a term that ensures that Eq. 18 belongs to , and whose definition depends on the choice of .

Now, using the algebraic approximation \(\breve{\mathcal {L}}(\varvec{Y};\breve{\varvec{Y}}) \approx \varvec{\tau }^{-1}(\varvec{Y})\breve{\varvec{Y}}\), we can re-write Eq. 18 as:

where \(\varvec{\tau }\) is the matrix of stabilization parameters that depends on *K* and the coefficients of the operator \(\breve{\mathcal {L}}\). It is important to notice that so far the matrix \(\varvec{\tau }\) does not depend on the choice of the approximation space , and therefore we can use the same definition of the stabilization parameters as in the FE approximation of the problem. In this case, we use the definition of the algebraic operator \(\varvec{\tau }\) given by [2, 13]:

$$\begin{aligned} \varvec{\tau }^{-1}(\varvec{Y})= \begin{bmatrix} \tau ^{-1}_m(\varvec{Y})\varvec{I}&\varvec{0}&\varvec{0} \\ \varvec{0}^\top&\tau ^{-1}_c(\varvec{Y})&0 \\ \varvec{0}^\top&0&\tau ^{-1}_e(\varvec{Y}) \end{bmatrix}, \end{aligned}$$

(20)

with the stabilization parameters defined as:

$$\begin{aligned} \tau ^{-1}_m&= c_1 \frac{\mu }{h^2} + c_2 \frac{\rho |\varvec{u}|}{h}, \end{aligned}$$

(21a)

$$\begin{aligned} \tau ^{-1}_c&= \frac{c_1 \rho \tau _m}{h^2}, \end{aligned}$$

(21b)

$$\begin{aligned} \tau ^{-1}_e&= c_1 \frac{\lambda }{h^2} + c_2 \frac{\rho c_p |\varvec{u}|}{h}, \end{aligned}$$

(21c)

where *h* is the element size and \(c_1\) and \(c_2\) are algorithmic constants.
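For concreteness, the element-wise evaluation of Eqs. 20–21 can be sketched as follows; the function names, the default constants \(c_1 = 4\), \(c_2 = 2\), and the 3D velocity block are illustrative choices of ours, not values prescribed by the references:

```python
import numpy as np

def stab_params(mu, rho, u, lam, cp, h, c1=4.0, c2=2.0):
    """Inverse stabilization parameters of Eq. 21, per element.

    mu: dynamic viscosity, rho: density, u: velocity vector,
    lam: thermal conductivity, cp: specific heat, h: element size.
    c1, c2 are algorithmic constants (illustrative defaults).
    """
    unorm = np.linalg.norm(u)
    inv_tau_m = c1 * mu / h**2 + c2 * rho * unorm / h        # Eq. 21a
    tau_m = 1.0 / inv_tau_m
    inv_tau_c = c1 * rho * tau_m / h**2                      # Eq. 21b
    inv_tau_e = c1 * lam / h**2 + c2 * rho * cp * unorm / h  # Eq. 21c
    return inv_tau_m, inv_tau_c, inv_tau_e

def tau_inv_matrix(inv_tau_m, inv_tau_c, inv_tau_e, dim=3):
    """Assemble the block-diagonal matrix of Eq. 20 (dim velocity
    components, then continuity, then energy)."""
    return np.diag([inv_tau_m] * dim + [inv_tau_c, inv_tau_e])
```

The resulting block-diagonal \(\varvec{\tau }^{-1}\) acts separately on the momentum, continuity and energy components, mirroring the structure of Eq. 20.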

To complete the approximation of the subscales, it still remains to define the term \(\varvec{\Upsilon }_r^\perp \), that is, to choose the appropriate subscales space . Based on the orthogonality property of the basis \(\phi \) and on the work developed in [13, 19], we define the subscales space , which can be thought of as being orthogonal to the approximation space ; this implies:

To numerically compute \(\varvec{\Upsilon }_r^\perp \) we follow the assumption made in [20], where the orthogonality between and is defined with respect to the weighted inner product \((\varvec{Y},\varvec{\Upsilon })_{\varvec{M}} = (\varvec{Y},\varvec{M}(\varvec{Y}) \varvec{\Upsilon })\). Under this condition, any subscale \(\breve{\varvec{Y}}\) must satisfy:

Now replacing the orthogonality condition (Eq. 23) in Eq. 19, we obtain:

from which it follows that \(\varvec{\Upsilon }_r^\perp \) is the following projection onto the space with respect to the \(L^2\)-inner product, denoted by \(\Pi _r\):

$$\begin{aligned} \varvec{\Upsilon }_r^\perp = \Pi _r \left( -\varvec{R}(\varvec{Y},\varvec{Y}_r) + \varvec{\tau }^{-1}(\varvec{Y})\breve{\varvec{Y}} \right) . \end{aligned}$$

(25)

Replacing \(\varvec{\Upsilon }_r^\perp \) into Eq. 19, and using the approximation \(\Pi _r \left( \varvec{\tau }^{-1}(\varvec{Y})\breve{\varvec{Y}} \right) = \varvec{M}^{-1}(\varvec{Y}) \varvec{\tau }^{-1}(\varvec{Y}) \Pi _r \left( \varvec{M}(\varvec{Y}) \breve{\varvec{Y}} \right) = \varvec{0}\) presented in [20], we find an expression for the FE-ROM-SGS:

$$\begin{aligned} \partial _t{(\varvec{M}(\varvec{Y}) \breve{\varvec{Y}})} + \varvec{\tau }^{-1}(\varvec{Y})\breve{\varvec{Y}} = \Pi _r^\perp \left( \varvec{R}(\varvec{Y},\varvec{Y}_r) \right) , \end{aligned}$$

(26)

with \(\Pi _r^\perp {:}{=}\varvec{I}-\Pi _r\) defined as the orthogonal projection onto , where \(\varvec{I}\) is now the identity in .
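As a small numerical sketch of the projections involved, assuming an orthonormal discrete basis and the plain Euclidean product in place of the weighted \(\varvec{M}\)-product (a simplification of ours):

```python
import numpy as np

def make_projectors(Phi):
    """Return Pi_r and Pi_r_perp = I - Pi_r for an orthonormal basis.

    Phi: matrix whose columns are the reduced-basis vectors,
    with Phi^T Phi = I (Euclidean orthonormality assumed here)."""
    def Pi_r(v):
        # project v onto the span of the basis columns
        return Phi @ (Phi.T @ v)
    def Pi_r_perp(v):
        # orthogonal complement: the part of v the basis cannot see
        return v - Pi_r(v)
    return Pi_r, Pi_r_perp
```

In the actual formulation the orthogonality is taken with respect to \((\cdot ,\cdot )_{\varvec{M}}\); the Euclidean version above only illustrates the algebra of \(\Pi _r\) and \(\Pi _r^\perp \).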

Finally, by introducing the orthogonality condition (Eq. 23) in Eq. 14, neglecting the term \(\varvec{M}(\varvec{Y}) \partial _t{\varvec{Y}}\) in the residual \(\varvec{R}\) of Eq. 26—since its orthogonality to gives \(\left( \varvec{M}(\varvec{Y}) \partial _t{\varvec{Y}}, \breve{\varvec{\Upsilon }} \right) = 0\)—and rearranging Eqs. 14 and 26, we can state the problem in Eq. 13 as finding and , such that:

$$\begin{aligned}&(\varvec{M}(\varvec{Y}) \partial _t{\varvec{Y}_r},\varvec{\Upsilon }_r ) + B (\varvec{Y};\varvec{Y}_r,\varvec{\Upsilon }_r )\,+\, \sum _K \left( \breve{\varvec{Y}},\mathcal {L}^*(\varvec{Y};\varvec{\Upsilon }_r) \right) _K = L(\varvec{\Upsilon }_r ), \end{aligned}$$

(27a)

$$\begin{aligned}&\partial _t{(\varvec{M}(\varvec{Y}) \breve{\varvec{Y}})} + \varvec{\tau }^{-1}(\varvec{Y})\breve{\varvec{Y}} = \Pi _r^\perp \left( \varvec{R}(\varvec{Y},\varvec{Y}_r) \right) , \end{aligned}$$

(27b)

with the adjoint operator \(\mathcal {L}^*\) and the residual \(\varvec{R}\) redefined as:

$$\begin{aligned} \mathcal {L}^*(\varvec{Y};\varvec{\Upsilon }_r)&= \varvec{S}^\top (\varvec{Y}) \varvec{\Upsilon }_r - \varvec{A}_j^{f\top }(\varvec{Y}) \partial _j{ \varvec{\Upsilon }_r} - \varvec{A}_j^{c\top }(\varvec{Y}) \partial _j{ \varvec{\Upsilon }_r} - \partial _{j}{\varvec{K}_{ij}^\top (\varvec{Y}) \partial _{i}{\varvec{\Upsilon }_r}}, \\ \varvec{R}(\varvec{Y},\varvec{Y}_r)&= \varvec{F} - \breve{\mathcal {L}}(\varvec{Y};\varvec{Y}_r). \end{aligned}$$

### Remark

By following the same analysis performed when deriving the orthogonal SGS in [13, 19], we have arrived at a rather similar definition of the FE-ROM-SGS, where the most important difference lies in the definition of the orthogonal projection \(\Pi ^\perp \) in Eq. 26. In the FE case the projection is done onto the space , while in the FE-ROM approximation it is done onto the space .

### Remark

The choice of the stabilization parameters \(\varvec{\tau }\) follows the Fourier analysis performed in [13]. Since the information represented by the reduced basis corresponds to the scales resolved by the FOM, the subscales of both the FOM and the ROM are part of the continuous solution that cannot be approximated by the FOM.

### Remark

The previous definition of the FE-ROM-SGS is equivalent to the dynamic orthogonal SGS model in [20]; it is important to acknowledge that the subscales could also be implemented without the temporal term (quasi-static), or not orthogonal to . An extensive analysis of the equivalent FE models is presented in [12, 20].

### Time discretization

Any time integration scheme could now be applied to discretize in time the FE-ROM and FE-ROM-SGS equations together (Eq. 27). Considering the results in [21], where it is shown that the time integration of the subscales can be one order less accurate than that of the finite element equations without affecting the accuracy of the scheme, we choose for this work two backward differentiation formulas (BDF): second order for the FE-ROM equations and first order for the FE-ROM-SGS ones. Setting a uniform partition of the time interval of analysis \([0,t_f]\), with \(\delta t\) the time step and superscript *n* denoting the current time step, we can approximate the temporal derivatives in Eq. 27 using:

$$\begin{aligned} \partial _t{\varvec{Y}_r} |^{n+1}&\approx \frac{\delta \varvec{Y}_r^{n+1}}{\delta t} {:}{=}\frac{3 \varvec{Y}_r^{n+1} - 4 \varvec{Y}_r^{n} + \varvec{Y}_r^{n-1}}{2 \delta t}, \end{aligned}$$

(28a)

$$\begin{aligned} \partial _t{\breve{\varvec{Y}}}|^{n+1}&\approx \frac{\delta \breve{\varvec{Y}}^{n+1}}{\delta t} {:}{=}\frac{\breve{\varvec{Y}}^{n+1} - \breve{\varvec{Y}}^{n}}{\delta t}. \end{aligned}$$

(28b)
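The two BDF approximations of Eq. 28 are straightforward to implement; a minimal sketch:

```python
def bdf2(y_np1, y_n, y_nm1, dt):
    """Second-order BDF approximation of d/dt at t^{n+1} (Eq. 28a)."""
    return (3.0 * y_np1 - 4.0 * y_n + y_nm1) / (2.0 * dt)

def bdf1(y_np1, y_n, dt):
    """First-order BDF (backward Euler) at t^{n+1} (Eq. 28b)."""
    return (y_np1 - y_n) / dt
```

Since BDF2 requires two history levels (\(\varvec{Y}_r^{n}\) and \(\varvec{Y}_r^{n-1}\)), the first time step is usually taken with a first-order scheme.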

Replacing the time integration scheme (Eq. 28b) in Eq. 27b we get an equation for the subscales:

$$\begin{aligned} \breve{\varvec{Y}}^{n+1} = \varvec{\tau }_t(\varvec{Y}^{n+1}) \left( \Pi _r^\perp \left( \varvec{R}(\varvec{Y}^{n+1},\varvec{Y}_r^{n+1}) \right) + \varvec{M}(\varvec{Y}^{n+1}) \frac{\breve{\varvec{Y}}^{n}}{\delta t} \right) , \end{aligned}$$

(29)

with the matrix of effective stabilization parameters defined as \(\varvec{\tau }_t (\varvec{Y}^{n+1}) {:}{=}\left( \frac{\varvec{M}(\varvec{Y}^{n+1})}{\delta t} + \varvec{\tau }^{-1} (\varvec{Y}^{n+1}) \right) ^{-1}\).
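Eq. 29 together with the effective parameters \(\varvec{\tau }_t\) amounts to a small per-element linear solve; a sketch under the assumption that \(\varvec{M}\), \(\varvec{\tau }^{-1}\) and the already-projected residual are available as dense arrays (function and argument names are ours):

```python
import numpy as np

def subscale_update(tau_inv, M, R_perp, Y_breve_n, dt):
    """One step of the FE-ROM-SGS update, Eq. 29.

    tau_inv   : matrix of inverse stabilization parameters (Eq. 20)
    M         : the matrix M(Y^{n+1})
    R_perp    : Pi_r_perp(R(Y^{n+1}, Y_r^{n+1})), already projected
    Y_breve_n : subscale at the previous time step
    """
    # effective stabilization parameters tau_t = (M/dt + tau^{-1})^{-1}
    tau_t = np.linalg.inv(M / dt + tau_inv)
    return tau_t @ (R_perp + M @ (Y_breve_n / dt))
```

As a sanity check, when the projected residual exactly balances the subscale, i.e. \(\Pi _r^\perp (\varvec{R}) = \varvec{\tau }^{-1}\breve{\varvec{Y}}^{n}\), the update returns \(\breve{\varvec{Y}}^{n+1} = \breve{\varvec{Y}}^{n}\), as expected of a steady state.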

Now, replacing Eq. 29 and the integration scheme (Eq. 28a) in Eq. 27a, we get:

$$\begin{aligned}&\left( \varvec{M}(\varvec{Y}^{n+1}) \frac{\delta \varvec{Y}_r^{n+1}}{\delta t},\varvec{\Upsilon }_r \right) + B (\varvec{Y}^{n+1};\varvec{Y}_r^{n+1},\varvec{\Upsilon }_r ) \nonumber \\&\qquad + \sum _K \left( \varvec{\tau }_t(\varvec{Y}^{n+1}) \Pi _r^\perp \left( \varvec{R}(\varvec{Y}^{n+1},\varvec{Y}_r^{n+1}) \right) , \mathcal {L}^*(\varvec{Y}^{n+1};\varvec{\Upsilon }_r) \right) _K \nonumber \\&\qquad + \sum _K \left( \varvec{\tau }_t(\varvec{Y}^{n+1}) \varvec{M}(\varvec{Y}^{n+1}) \frac{\breve{\varvec{Y}}^{n}}{\delta t}, \mathcal {L}^*(\varvec{Y}^{n+1};\varvec{\Upsilon }_r) \right) _K = L(\varvec{\Upsilon }_r ), \end{aligned}$$

(30)

### Linearization

To handle the non-linearity of the terms involving \(\varvec{Y}\), we implement a linearization scheme based on Picard's method. Following the terminology of [20], for each time step \(n+1\) we first solve Eq. 30 for iteration \(i+1\), where the non-linear terms can be approximated in two ways: as \(\varvec{Y}_r^{n+1,i}\) (linear subscales), or as \(\varvec{Y}_r^{n+1,i} + \breve{\varvec{Y}}^{n+1,i}\) (non-linear subscales). Then we solve Eq. 29 for iteration \(j+1\), approximating the non-linear terms in the same way: with linear subscales (\(\varvec{Y}_r^{n+1,i+1}\)), or non-linear subscales (\(\varvec{Y}_r^{n+1,i+1} + \breve{\varvec{Y}}^{n+1,j}\)).
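The nested structure of the scheme can be sketched as a driver loop; `solve_rom` and `solve_sgs` are hypothetical callables standing in for the linearized solves of Eqs. 30 and 29, and the scalar convergence test is only illustrative:

```python
def picard_step(Y_r0, Y_br0, solve_rom, solve_sgs,
                tol=1e-8, max_it=100, nonlinear_subscales=False):
    """One time step of the Picard linearization (hypothetical driver).

    solve_rom(Y_eval) -> Y_r : stand-in for the linearized solve of Eq. 30
    solve_sgs(Y_eval) -> Y_br: stand-in for the subscale update of Eq. 29
    With linear subscales the non-linear terms are evaluated at Y_r alone;
    with non-linear subscales, at Y_r + Y_breve.
    """
    Y_r, Y_br = Y_r0, Y_br0
    for _ in range(max_it):
        # FE-ROM solve (Eq. 30), iteration i+1
        Y_eval = Y_r + Y_br if nonlinear_subscales else Y_r
        Y_r_new = solve_rom(Y_eval)
        # FE-ROM-SGS update (Eq. 29), iteration j+1
        Y_eval = Y_r_new + Y_br if nonlinear_subscales else Y_r_new
        Y_br = solve_sgs(Y_eval)
        if abs(Y_r_new - Y_r) < tol:  # illustrative scalar stopping test
            return Y_r_new, Y_br
        Y_r = Y_r_new
    return Y_r, Y_br
```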

### Discrete approximation

We can describe the discrete representation of the FE-ROM problem as a composition of the FE and ROM approximations. In FE, the space is made of continuous piecewise polynomial functions on the domain \(\Omega \), and the discrete approximation of the unknown is written as \(\varvec{Y} \approx \varvec{Y}_h(\varvec{x},t) {:}{=}\sum \nolimits _{i=1}^n N(\varvec{x}^i) \varvec{Y}^{i}(t)\), with \(N(\varvec{x}^i)\) the shape function of node *i*. In contrast, in ROM we approximate the unknown \(\varvec{Y}\) as \(\varvec{Y}(t) \approx \bar{\varvec{Y}} + \sum \nolimits _{k=1}^m \varvec{\phi }^k Y^k (t)\).

Using these two approximations, we can describe the space in two ways: as a FE space represented using the orthogonal basis \(\phi \), or as a ROM approximation of the problem, discretized in \(\Omega \) using continuous piecewise polynomial functions. In this way, we can write the discrete representation of \(\varvec{Y}_r\) as:

$$\begin{aligned} \varvec{Y}_r(\varvec{x},t) {:}{=}\sum \limits _{i=1}^n N(\varvec{x}^i) \left[ \bar{\varvec{Y}} + \sum \limits _{k=1}^m \varvec{\phi }^{k} (\varvec{x}^i) Y^{k}(t) \right] , \quad \forall \varvec{x} \in \Omega , \quad t>0. \end{aligned}$$

(31)
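Eq. 31 composes a FE interpolation with a modal expansion; a minimal evaluation sketch, where the array layout is an assumption of ours:

```python
import numpy as np

def reconstruct_Yr(N, Y_mean, Phi, y_modal):
    """Evaluate Eq. 31 at a set of points.

    N       : (npoints, nnodes) shape-function values at the points
    Y_mean  : (nnodes,) mean field \\bar{Y} at the nodes
    Phi     : (nnodes, m) POD modes phi^k evaluated at the nodes
    y_modal : (m,) modal amplitudes Y^k(t)
    """
    nodal = Y_mean + Phi @ y_modal  # bracketed nodal values of Eq. 31
    return N @ nodal                # FE interpolation to the points
```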

### Petrov–Galerkin projection

Since the formulation presented above introduces the non-symmetric linear operators \(\mathcal {L}\) and \(\mathcal {L}^*\), it is necessary to find an optimal projection, in the sense that it yields a feasible solution [14]. To address this lack of optimality in the projection, we replace the traditional Galerkin projection by the Petrov–Galerkin projection defined in [7]. Let us re-write the linearized version of Eq. 30 as the linear system:

$$\begin{aligned} \varvec{\Phi }^\top \varvec{\mathsf {L}} \varvec{\Phi } \varvec{\mathsf {Y}}_r = \varvec{\Phi }^\top \varvec{\mathsf {R}} \end{aligned}$$

(32)

where \(\varvec{\Phi }\) is the discrete basis matrix, \(\varvec{\Phi }^\top \varvec{\mathsf {L}} \varvec{\Phi }\) and \(\varvec{\Phi }^\top \varvec{\mathsf {R}}\) are the discrete left and right hand sides of Eq. 30, and \(\varvec{\mathsf {Y}}_r\) the discrete unknown.

To apply the Petrov–Galerkin projection in a natural way, we define \(\varvec{\mathsf {W}} = \varvec{\Phi }^\top \varvec{\mathsf {L}} \varvec{\Phi }\) as a matrix whose column vectors form a basis of the projection space, which allows us to transform the left-hand side of Eq. 32 into a positive semi-definite matrix. Projecting Eq. 32 onto this space we get:

$$\begin{aligned} \varvec{\mathsf {W}}^\top \varvec{\Phi }^\top \varvec{\mathsf {L}} \varvec{\Phi } \varvec{\mathsf {Y}}_r&= \varvec{\mathsf {W}}^\top \varvec{\Phi }^\top \varvec{\mathsf {R}}, \nonumber \\ \varvec{\Phi }^\top \varvec{\mathsf {L}}^\top \varvec{\Phi } \varvec{\Phi }^\top \varvec{\mathsf {L}} \varvec{\Phi } \varvec{\mathsf {Y}}_r&= \varvec{\Phi }^\top \varvec{\mathsf {L}}^\top \varvec{\Phi } \varvec{\Phi }^\top \varvec{\mathsf {R}}. \end{aligned}$$

(33)

Using the orthogonality property of the basis \(\varvec{\Phi } \varvec{\Phi }^\top = \varvec{\mathsf {I}}\), Eq. 33 becomes one that resembles the Petrov–Galerkin ROM formulations in [7, 9]:

$$\begin{aligned} \varvec{\Phi }^\top \varvec{\mathsf {L}}^\top \varvec{\mathsf {L}} \varvec{\Phi } \varvec{\mathsf {Y}}_r = \varvec{\Phi }^\top \varvec{\mathsf {L}}^\top \varvec{\mathsf {R}} \end{aligned}$$

(34)
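Eq. 34 is the normal-equations form of the least-squares problem \(\min \Vert \varvec{\mathsf {L}} \varvec{\Phi } \varvec{\mathsf {Y}}_r - \varvec{\mathsf {R}} \Vert \); a sketch using a least-squares solver rather than forming the normal equations explicitly (a numerically safer but otherwise equivalent route):

```python
import numpy as np

def petrov_galerkin_solve(L, R, Phi):
    """Solve Eq. 34: (Phi^T L^T L Phi) Y_r = Phi^T L^T R.

    Equivalent to minimizing || L Phi Y_r - R ||_2, which we hand to
    numpy's lstsq instead of squaring the condition number by forming
    the normal equations."""
    A = L @ Phi
    Y_r, *_ = np.linalg.lstsq(A, R, rcond=None)
    return Y_r
```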

### Hyper-ROM

Lastly, in order to reduce the computational cost of evaluating nonlinear terms, we propose a mesh-based hyper-ROM as an alternative to the sampling-based domain reduction algorithms [5,6,7,8,9,10].

The mesh-based hyper-ROM consists in solving the described ROM problem on a coarser mesh than that of the FOM. The implementation of this technique is straightforward: the discrete approximation (Eq. 31) is simply written in terms of the new, coarser mesh.

Ideally, the coarsening should be driven by the ‘less important’ regions of the geometry, which can be achieved using existing mesh refinement algorithms. In the subsequent examples we test this technique using a uniform coarsening of the mesh.

### Remark

When the POD basis is obtained by sampling a mesh-based solution (a FE one, for example), coarsening the mesh implies interpolating that basis.
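A minimal illustration of such an interpolation in 1D, with piecewise-linear interpolation standing in for whatever the FE space actually provides (the names and the 1D setting are ours):

```python
import numpy as np

def coarsen_basis(x_fine, Phi_fine, x_coarse):
    """Interpolate each POD mode (column of Phi_fine) from the
    fine-mesh nodes x_fine to the coarse-mesh nodes x_coarse,
    using 1D piecewise-linear interpolation."""
    return np.column_stack([
        np.interp(x_coarse, x_fine, Phi_fine[:, k])
        for k in range(Phi_fine.shape[1])
    ])
```

Note that the interpolated modes are in general no longer orthonormal on the coarse mesh, which may need to be accounted for.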