# Space–time adaptive hierarchical model reduction for parabolic equations

- Simona Perotto^{1} (Email author)
- Alessandro Zilio^{2}

**2**:25

**DOI: **10.1186/s40323-015-0046-4

© Perotto and Zilio. 2015

**Received: **10 February 2015

**Accepted: **17 September 2015

**Published: **13 October 2015

## Abstract

### Background

Surrogate solutions and surrogate models for complex problems in many fields of science and engineering represent an important recent line of research towards the best trade-off between modeling reliability and computational efficiency. Among surrogate models, hierarchical model (HiMod) reduction provides an effective approach for phenomena characterized by a dominant direction in their dynamics. The HiMod approach yields 1D models naturally enhanced by the inclusion of the effect of the transverse dynamics.

### Methods

HiMod reduction couples a finite element approximation along the mainstream with a locally tunable modal representation of the transverse dynamics. In particular, we focus on the pointwise HiMod reduction strategy, where the modal tuning is performed on each finite element node. We formalize the pointwise HiMod approach in an unsteady setting, by resorting to a model discontinuous in time, continuous and hierarchically reduced in space (c[M(\(\mathbf{M}\))G(*s*)]-dG(*q*) approximation). The selection of the modal distribution and of the space–time discretization is automatically performed via an adaptive procedure based on an *a posteriori* analysis of the global error. The final outcome of this procedure is a table, named *HiMod lookup diagram*, that sets the time partition and, for each time interval, the corresponding 1D finite element mesh together with the associated modal distribution.

### Results

The results of the numerical verification confirm the robustness of the proposed adaptive procedure in terms of accuracy, of sensitivity to the goal quantity and to the boundary conditions, and of computational saving. Finally, the validation results in the groundwater experimental setting are promising.

### Conclusion

The extension of the HiMod reduction to an unsteady framework represents a crucial step with a view to practical engineering applications. Moreover, the results of the validation phase confirm that HiMod approximation is a viable approach.

### Keywords

Hierarchical model reduction; Model adaptation; Space–time adaptation; Goal-oriented a posteriori error analysis; Unsteady advection–diffusion–reaction problems

## Background

The extensive use of scientific computing in many fields of science and engineering increasingly often requires a compromise between modeling reliability and computational efficiency [1]. This goal is currently pursued in the literature via two complementary methodologies, i.e., *surrogate solutions* and *surrogate models*. Surrogate solutions are generally obtained via a reduction of the size of the finite dimensional solution space, as in the reduced basis approach [2], or in the proper orthogonal decomposition (POD) [3] and proper generalized decomposition (PGD) methods [4, 5].

Surrogate models directly replace the reference model with a simplified formulation, as in geometric multiscale modeling [6, 7] or in compressed sensing [8]. This is usually accomplished by taking advantage of specific features of the problem at hand, such as a prevalent direction in the involved dynamics or in the geometry of the computational domain. This is exactly the criterion exploited to establish the hierarchical model (HiMod) reduction proposed in [9, 10]. The HiMod technique derives enriched 1D surrogate models to describe phenomena characterized by a leading dynamics, albeit in the presence of locally significant transverse features. In particular, the descriptive properties of purely 1D models are enhanced by keeping track of the transverse dynamics in the reduced model. This is achieved by enriching a finite element discretization of the mainstream with a modal representation of the secondary dynamics. This strategy leads to a 1D finite element model with *ad-hoc* coefficients that implicitly include the generally non-constant description of the transverse dynamics. The possibility of locally tuning the modal expansion to match spatial heterogeneities represents one of the main strengths of the HiMod approach [11].

In this paper, we focus on the pointwise HiMod reduction strategy proposed in [12], where the modal tuning is performed on the finite element nodes. For this reason, the pointwise approach turns out to be the most flexible one among the available HiMod procedures [13], being suited to model both localized and widespread dynamics. In particular, with a view to practical applications, we extend the pointwise HiMod formulation to an unsteady setting by resorting to a discretization discontinuous in time. We generalize the cG(*s*)-dG(*q*) formulation in [14–16] to the HiMod setting, by defining a reduced solution that we denote by c[M(\(\mathbf{M}\))G(*s*)]-dG(*q*) approximation. We replace the full model with a solution that is continuous in space and discontinuous in time. It is obtained via a Galerkin spatial approximation that combines finite elements of degree *s* with the modal expansion identified by the index \(\mathbf{M}\), and via discontinuous piecewise polynomials of degree *q* in time.

The selection of the modal distribution as well as of the space–time discretization represents a crucial step of the HiMod reduction. For this reason, we introduce a preprocessing phase to automatically identify the HiMod solution, for fixed values of *s* and *q*. The final outcome of this phase is a table that identifies the time partition and then, for each time interval, selects the corresponding 1D finite element mesh together with the associated modal distribution. We call this table *HiMod lookup diagram*. To this purpose, we resort to an adaptive procedure based on an *a posteriori* analysis of the global (modeling plus space–time discretization) error. We rely upon a goal-oriented setting [17–19], so that the prediction of the c[M(\(\mathbf{M}\))G(*s*)]-dG(*q*) model is driven by a physical quantity of interest.

The estimator for the global error consists of a modeling contribution and of a discretization contribution, which are kept distinct [11, 20–22]. This represents a crucial property with a view to a global adaptation algorithm. In particular, the modeling estimator generalizes the goal-oriented hierarchical *a posteriori* error estimator derived in [11] to a time-dependent setting, and it includes the temporal discontinuities of the c[M(\(\mathbf{M}\))G(*s*)]-dG(*q*) scheme. The estimator for the discretization error, in turn, keeps the temporal contribution separate from the spatial one [23–26]; it is obtained by including the intrinsically dimensionally hybrid nature of a HiMod approximation into the standard goal-oriented analysis, as in [11].

Although the HiMod lookup diagram is strictly tailored to the problem at hand, we will show that it can be employed to deal with certain variants of such a problem. Thus the computational effort characterizing the preprocessing pays off.

A first validation of the HiMod reduction procedure is also provided in this paper, by dealing with an experimental and modeling study of solute transport in porous media [27].

## The full setting

We introduce the reference parabolic model we aim at reducing via an adaptive space–time model reduction procedure. A standard notation is adopted for the Sobolev spaces associated with the spatial independent variable only, as well as for the space of the functions bounded almost everywhere [28]. Concerning a space–time dependence, we introduce the spaces \(L^2(0, T; W)=\big \{ v:(0, T) \rightarrow W :\int _{0}^{T} \Vert v(t) \Vert _W^2 dt< +\infty \big \}\), \(H^1(0, T; W)=\big \{ v,\ \frac{\partial v}{\partial t} \in L^2(0, T; W) \big \}\), \(C^0([0, T]; W)=\big \{v:[0, T] \rightarrow W\, \text{ continuous } :\forall t \in [0, T], \ \Vert v(t)\Vert _W< +\infty \big \}\), where *W* denotes a generic Hilbert space, with \(\Vert \cdot \Vert _W\) the associated norm [29].

### The problem

*L* is a generic second-order elliptic operator with diffusive contribution given by \(-\nabla \cdot (D \nabla u)\), so that \(D \nabla u \cdot \mathbf{n}\equiv \partial _\nu u\) is the conormal derivative of *u*, \(\mathbf {n}\) being the unit outward normal vector to \(\partial \Omega \). Concerning the data, we choose the source \(f \in L^2(0, T; L^2(\Omega ))\), the diffusivity tensor \(D=[d_{ij}] \in [L^{\infty }(\Omega )]^{d\times d}\) such that the uniform ellipticity condition holds, the initial datum \(u_0\in L^2(\Omega )\), and the Neumann datum \(g \in L^2(0, T; L^2(\Gamma _N))\). In the next section, further requirements are added on the computational domain as well as on the boundary conditions in view of the HiMod procedure.

*L*, here assumed continuous and coercive. Problem (2) represents the *full problem*, with *u* the full solution.

The continuous embedding \(V \hookrightarrow C^0([0, T]; L^2(\Omega ))\) ensures the temporal continuity to the weak solution *u* in (2).

### The computational domain

Problems suited to a HiMod reduction are defined on domains characterized by a prevalent dimension, with the leading dynamics aligned with such a dimension.

*d*-dimensional fiber bundle \(\Omega =\bigcup _{x \in \Omega _{1D}} \{ x \} \times \gamma _x\), where \(\Omega _{1D}\) is the supporting 1D fiber described by the independent variable *x* and aligned with the dominant dynamics, while \(\gamma _x \subset {\mathbb {R}}^{d-1}\) denotes the transverse fiber that is, in general, a function of *x* and parallel to the transverse dynamics. For the sake of simplicity, we assume \(\Omega _{1D}\equiv ]x_0,x_1[\) to be rectilinear and we refer to [30] for the more general case of a curved supporting fiber. We partition the boundary \(\partial \Omega \) of \(\Omega \) into three disjoint sets, \(\Gamma _0= \{x_0\}\times \gamma _{x_0}\), \(\Gamma _1= \{x_1\}\times \gamma _{x_1}\) and \(\Gamma _*=\bigcup _{x\in \Omega _{1D}} \partial \gamma _x\), such that \(\partial \Omega = \Gamma _0 \cup \Gamma _1 \cup \Gamma _*\) (see Remark 2 for further details).

Now, we map the domain \(\Omega \) into a reference bundle \(\widehat{\Omega }\), where the computations are easier, free from undetermined constants, and carried out once and for all. To this aim, for any \(x\in \Omega _{1D}\), we introduce the map \(\psi _{x}:\gamma _x \rightarrow \widehat{\gamma }_{d-1}\) between the generic fiber \(\gamma _x\) and the reference fiber \(\widehat{\gamma }_{d-1}\subset {\mathbb {R}}^{d-1}\). The maps \(\psi _x\) are instrumental in defining the global map \(\Psi :\Omega \rightarrow \widehat{\Omega }\), where \(\widehat{\Omega }= \bigcup _{x \in \Omega _{1D}} \{ x \}\times \widehat{\gamma }_{d-1}\) denotes the reference computational domain (see Fig. 1 for an example of the map \(\Psi \)). Regularity assumptions are introduced on the maps \(\psi _x\) and \(\Psi \). In particular, we assume \(\psi _x\) to be a \(C^1\)-diffeomorphism, for all \(x\in \Omega _{1D}\), and \(\Psi \) to be differentiable with respect to \(\mathbf{z}\) (essentially to exclude any kinks along \(\Gamma _*\)).

We also demand that the supporting fiber \(\Omega _{1D}\) is preserved by the map \(\Psi \), so that the generic point \(\mathbf{z} =(x, \mathbf{y})\in \Omega \) is mapped into \( \widehat{\mathbf{z}}=\Psi ({\mathbf{z}})=(\widehat{x},\widehat{\mathbf{y}})\), with \(\widehat{x}\equiv x\) and \(\widehat{\mathbf{y}} = \psi _x (\mathbf{y})\). Finally, without loss of generality, we assume \(\Omega _{1D}\) to be the subset of \(\Omega \) with \(\mathbf{y}=\mathbf{0}\), i.e., \(\Omega _{1D}\) exactly coincides with the centerline of \(\Omega \).

### *Remark 1*

In a 2D setting, we may always select \(\psi _x\) as a linear transformation, so that \(\widehat{y} = \psi _x (y) = y/L(x)\), with \(L(x)=\mathrm {meas} (\gamma _x)\). In 3D a similar choice is possible only for specific configurations, for instance when \(\Omega \) is a cylindrical domain. In this case *L*(*x*) coincides with the diameter of the pipe along the centerline.
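As a minimal sketch of this linear choice (all helper names are hypothetical, and the wavy width profile is an invented example), the map \(\psi _x\) and its inverse can be coded as:

```python
import math

def L(x):
    """Local measure of the transverse fiber gamma_x (invented wavy profile)."""
    return 1.0 + 0.1 * math.sin(2.0 * math.pi * x)

def Psi(x, y):
    """Global map (x, y) -> (x_hat, y_hat); the supporting fiber is preserved."""
    return x, y / L(x)          # psi_x(y) = y / L(x), linear in the transverse variable

def Psi_inv(x_hat, y_hat):
    """Inverse map back onto the physical fiber gamma_x."""
    return x_hat, y_hat * L(x_hat)
```

A round trip `Psi_inv(*Psi(x, y))` recovers the physical point, which is the minimal consistency check for such a fiber map.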

## HiMod reduction

The HiMod technique has been proposed in [9, 10] with the idea of exploiting the fiber structure demanded of \(\Omega \) or, likewise, the preferential dynamics of the phenomenon at hand. Currently, three versions of HiMod reduction have been investigated, from both a theoretical and a numerical viewpoint (see [13] for a survey of the different approaches). Independently of the selected technique, the idea is to manage in a different way the dependence of the solution on the leading and on the transverse dynamics. In particular, since HiMod aims at providing enriched 1D models associated with the dominant direction, only the dominant dynamics is discretized via a standard finite element scheme, while the information on the transverse dynamics is retrieved via a modal expansion. In this section, we consider two of the available HiMod formulations.

### Uniform HiMod reduction

### *Remark 2*

The analysis below is completely general with respect to the boundary data. So far, the robustness of the HiMod reduction has been verified when either homogeneous Dirichlet or homogeneous Neumann boundary conditions are assigned on \(\Gamma _0\), \(\Gamma _1\), \(\Gamma _*\), or when non-homogeneous Dirichlet data are enforced on \(\Gamma _0\) and \(\Gamma _1\). In general, the critical point is the identification of a basis \({\mathcal B}\) matching Robin boundary conditions or non-homogeneous data on \(\Gamma _*\). A new strategy to address this issue has been recently proposed in [31].

*I* into *N* subintervals \(I_n=(t_{n-1}, t_n]\) of width \(k_n=t_n-t_{n-1}\), for \(n= 1, \ldots , N\), with \(k=\max _n k_n\), \(t_0\equiv 0\) and \(t_N\equiv T\). This partition induces a subdivision of the cylinder *Q* into *N* space–time slabs \(S_n=\Omega \times I_n\), with \(n= 1, \ldots , N\). Notice that the partition \(\{ t_i \}_{i=0}^N\) is not necessarily uniform, to match the possible time heterogeneities of the problem. Now, we look for an approximate solution to (2) coinciding, on each space–time slab \(S_n\), with a polynomial of degree at most *q* in time, with \(q\in {\mathbb {N}}^+\), and with an element of \(V_m\) in space, i.e., a function of the reduced space

*A priori*, functions in \(V_m^N\) may exhibit a discontinuity at each time level, with continuity from the left. As a consequence, a different number of modal functions can be selected on each time interval \(I_n\) (see Fig. 2). This choice allows us to replace the modal index *m* in (4) with the index \(m_n\in {\mathbb {N}}^+\), with \(n=1, \ldots , N\). In such a case we adopt the term space–time *slabwise uniform* HiMod reduction and we change the notation in (4) into \(V_\mathbf{m}^N\), where \(\mathbf{m}=[m_1, \ldots , m_N]'\in \big [ {\mathbb {N}}^+\big ]^N\) is the vector that collects the number of modes used on each interval \(I_n\), with \(v_\mathbf{m}\) the generic function in \(V_\mathbf{m}^N\).
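To fix ideas, the slabwise bookkeeping (nonuniform time partition plus modal vector \(\mathbf{m}\)) can be sketched with invented values as:

```python
def build_slabs(t_grid, modes_per_slab):
    """Pair each interval I_n = (t_{n-1}, t_n] with its modal index m_n."""
    assert len(modes_per_slab) == len(t_grid) - 1
    return [
        {"interval": (t_grid[n], t_grid[n + 1]),
         "width": t_grid[n + 1] - t_grid[n],
         "modes": modes_per_slab[n]}
        for n in range(len(t_grid) - 1)
    ]

t_grid = [0.0, 0.3, 0.5, 0.8, 1.0]   # t_0 = 0, ..., t_N = T = 1 (nonuniform)
m = [3, 3, 5, 13]                    # m = [m_1, ..., m_N]: more modes near t = T
slabs = build_slabs(t_grid, m)
k = max(s["width"] for s in slabs)   # k = max_n k_n
```

This illustrates why \(V_\mathbf{m}^N\) carries one modal index per slab: each dictionary entry stands for one \(S_n\).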

*V*. This remark allows us to provide a weak formulation for problem (1) equivalent to (2): find \(u\in V\) such that

*w*, \(\zeta \in V\),

*slabwise uniform* HiMod formulation can thus be stated: find \(u_\mathbf{m}\in V_\mathbf{m}^N\) such that, for any \(v_\mathbf{m}\in V_\mathbf{m}^N\),

*x*, after introducing a subdivision, not necessarily uniform, of \(\Omega _{1D}\) into subintervals. The time discontinuity allows us to employ a different 1D mesh on each space–time slab (see Fig. 2). In particular, we denote by \(\mathcal T_{h_n}=\{ K_l^n\}_{l=1}^{{\mathcal M}_n}\) the spatial partition associated with \(S_n\) for \(n=1, \ldots , N\), with \(K_l^n=(x_{l-1}^n, x_l^n)\) the generic subinterval of width \(h_l^n=x_l^n-x_{l-1}^n\) for \(l=1, \ldots , {\mathcal M}_n\), with \(h_n=\max _l h_l^n\) and \(x_0^n\equiv x_0\), \(x_{{\mathcal M}_n}^n\equiv x_1\). Then, we furnish each \(S_n\) with the space \(X_{h_n}^{1D, s}\) of the conforming finite elements of degree *s* associated with \(\mathcal T_{h_n}\), with \(\dim (X_{h_n}^{1D, s})=N_{h_n}<+\infty \). A standard density hypothesis in \(V_{1D}\) is advanced on each finite element space. Thus, the discrete counterpart of formulation (8) is: find \(u_\mathbf{m}^h\in V_{\mathbf{m}, h}^N\) such that, for any \(v_\mathbf{m}^h\in V_{\mathbf{m}, h}^N\),

It follows that \(V_{\mathbf{m}, h}^N \subset V_\mathbf{m}^N\), i.e., also the discrete HiMod space \(V_{\mathbf{m}, h}^N\) consists of functions continuous in space but discontinuous in time. Notice that, although \(V_{\mathbf{m}, h}^N \not \subset V\), in (9) we can extend definitions (6) and (7) to \(V_{\mathbf{m}, h}^N\) by taking advantage of the slabwise splitting.

By generalizing the notation used in [14–16] for finite elements that are continuous in space and discontinuous in time, we refer to \(V^N_{\mathbf{m}, h}\) as the HiMod c[M(\(\mathbf{m}\))G(*s*)]-dG(*q*) space (and, analogously, to (9) as the c[M(\(\mathbf{m}\))G(*s*)]-dG(*q*) HiMod formulation). We mean that, on each \(S_n\), the full solution is replaced by a reduced solution continuous in space and discontinuous in time, obtained via a Galerkin approximation based on finite elements of degree *s* combined with the modal expansion associated with the multi-index \(\mathbf{m}\) to discretize the space, and on piecewise polynomials of degree *q* for the time discretization.

*s*)]-dG(*q*) HiMod formulation (9).

### HiMod versus PGD

Following the classification proposed in [5], both HiMod reduction and PGD can be categorized as *a priori* approaches, since they do not rely on any solution to the problem at hand as, for instance, a POD strategy does. Both methods involve the weak form of the full problem and are based on a classical separation of variables. Nevertheless, while HiMod reduction applies this separation only to the space–time coordinates, PGD also involves problem parameters, such as boundary conditions or material properties, thus increasing the dimension of the space of the unknowns. HiMod applies a different discretization to the variables based on the physics of the problem. The accuracy for each variable may be tuned locally via *a posteriori* arguments. A PGD approach replaces in (3) the known modal function \(\varphi _j( \psi _x( \mathbf{y} ) )\), with \(\mathbf{y}=(y, z)'\), by a term of the form \(F_j(y)G_j(z)\), with \(F_j\) and \(G_j\) unknown. The two procedures both lead to 1D algebraic systems. In PGD they are intrinsically nonlinear and, in general, of large dimension; PGD therefore requires specific methods for the nonlinearity. In addition, the construction of the PGD approximation via a successive enrichment of an initial solution closely resembles the heuristic approach initially used in HiMod for selecting the number of transverse modes [10]. In this respect, the automatic selection of the HiMod approximation in [11] may represent an important evolution in a PGD setting as well.
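Schematically, assuming the standard HiMod expansion form for the elided Eq. (3) (with \(\tilde u_j\) the unknown 1D coefficient functions, a notation we introduce here for illustration), the two separations can be contrasted as:

```latex
\underbrace{u_m(x,\mathbf{y}) \;=\; \sum_{j=1}^{m} \tilde u_j(x)\,
  \varphi_j\big(\psi_x(\mathbf{y})\big)}_{\text{HiMod: } \varphi_j \text{ known}}
\qquad \text{vs.} \qquad
\underbrace{u_m(x,\mathbf{y}) \;=\; \sum_{j=1}^{m} \tilde u_j(x)\,
  F_j(y)\, G_j(z)}_{\text{PGD: } F_j,\ G_j \text{ unknown}}
```

The unknown transverse factors \(F_j\), \(G_j\) are the source of the nonlinearity in the PGD algebraic systems mentioned above.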

### Pointwise HiMod reduction

A fixed number of modal functions on the whole \(\Omega \) may be too restrictive in the presence of spatial heterogeneities. This justifies the formalization of HiMod strategies alternative to the uniform approach, where a different number of modes is adopted in different subdomains of \(\Omega \) (via a piecewise HiMod reduction [10, 11]) or in correspondence with each finite element node (via the pointwise HiMod formulation [12]). We focus on the latter approach. The numerical verification in [12] identifies the pointwise method as the best-performing one in the presence of either widespread or localized transverse dynamics.

*l*. Space \(V_{\mathbf{m}, h}^N\) is thus replaced by the new space

*pointwise* HiMod reduction and will be denoted by the c[M(\(\mathbf{M}\))G(*s*)]-dG(*q*) form. It reads exactly as (9), simply by replacing space \(V_{\mathbf{m}, h}^N\) with \(V_{\mathbf{M}, h}^N\). Notice that, since definition (12) strictly depends on the finite element discretization, a weak counterpart of the pointwise HiMod formulation does not exist.

#### Uniform versus pointwise HiMod reduction: an example

We compare the uniform and the pointwise HiMod approaches on the steady test case 4 in [10], which models the transport of oxygen in a wavy channel representing a Bellhouse oxygenator for extra-corporeal circulation [32]. This problem is characterized by a widespread dynamics, which is suited to reduction via both HiMod techniques.

*u* computed with FreeFem++ [33] on an unstructured uniform mesh of about 50,000 elements and via 2D affine finite elements. The irregular shape of the domain strongly affects the main stream of the flow on the whole domain, as highlighted by the bent contour lines.

As far as the HiMod reduction is concerned, we discretize the dependence of *u* on *x* via affine finite elements, after introducing a partition of uniform step \(h =0.1\) on \(\Omega _{1D}\). The transverse dynamics are described with a basis \({\mathcal B}\) of sinusoidal functions. To evaluate the integrals of the modal functions, we resort to Gaussian quadrature formulas based on at least four quadrature nodes per wavelength. No stabilization scheme is used. We first apply the uniform HiMod approach by resorting to 11 modal functions (see Fig. 3, right). Indeed, as shown in [10], at least 11 modes are required to obtain a sufficiently reliable HiMod approximation.
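The node-count rule can be illustrated with a hedged sketch (not the authors' code: the \(\sqrt{2}\)-normalized sine basis and the composite 2-point Gauss rule with \(4\max (j, j')\) cells are our assumptions, chosen to comfortably exceed the four-nodes-per-wavelength minimum):

```python
import math

def gauss2_composite(f, a, b, n_sub):
    """Composite 2-point Gauss-Legendre quadrature on [a, b] with n_sub cells."""
    g = 1.0 / math.sqrt(3.0)                 # reference nodes at +/- 1/sqrt(3)
    h = (b - a) / n_sub
    total = 0.0
    for i in range(n_sub):
        mid = a + (i + 0.5) * h
        total += 0.5 * h * (f(mid - 0.5 * h * g) + f(mid + 0.5 * h * g))
    return total

def mode(j):
    """j-th sinusoidal modal function on the reference fiber [0, 1]."""
    return lambda y: math.sqrt(2.0) * math.sin(j * math.pi * y)

# Mode j has j/2 wavelengths on [0, 1]; 4*max(j, jp) cells give 8 quadrature
# nodes per wavelength of the highest mode, well above the four-node minimum.
j, jp = 7, 11
n_sub = 4 * max(j, jp)
orth = gauss2_composite(lambda y: mode(j)(y) * mode(jp)(y), 0.0, 1.0, n_sub)
norm = gauss2_composite(lambda y: mode(j)(y) ** 2, 0.0, 1.0, n_sub)
```

With enough nodes per wavelength, the computed values reproduce the L2-orthonormality of the sine basis (`orth` close to 0, `norm` close to 1); undersampling the highest mode would destroy this.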

As a second assessment, we build the pointwise HiMod approximation \(u_\mathbf{M}^h\) associated with the modal distribution \(\mathbf{M}\) in Fig. 4, center. By comparing Fig. 3, right with Fig. 4, left, we recognize that the two reduced solutions are very similar. In particular, the innermost contour lines associated with \(u_\mathbf{M}^h\) are more accurate, despite the lower number of dof required by the pointwise approximation (48,400 dof characterize \(u_{11}^h\), to be compared with 28,282 dof for \(u_\mathbf{M}^h\); see the corresponding sparsity pattern in Fig. 4, right).

In accordance with [12], the results in Figs. 3 and 4 show the improved modeling capabilities of the pointwise HiMod method versus the uniform approach, for a fixed computational effort. The main issue related to a pointwise formulation is the selection of the nodewise modal distribution. This corroborates the need for an automatic modal selection.

## Adaptive HiMod reduction

Due to its significant impact on practical applications, we consider a goal-oriented framework (see, e.g., [17–19]), so that the predicted reduced model fits a goal functional representing a physical quantity of interest (e.g., mean or pointwise values, fluxes across sections or regions, the energy of the system, the vorticity of a turbulent flow). We denote by *J* the selected functional and we assume it is linear. We aim at approximating, within a prescribed tolerance TOL, the value *J*(*u*), with *u* solution to the full problem (2), via \(J(u_\mathbf{M}^h)\), where \(u_\mathbf{M}^h\) is the reduced solution identified by a preprocessing phase.

At this stage, we use a uniform and sufficiently fine discretization \(\big \{ \big (x_l^n, t_n\big )_{l=1}^{{\mathcal M}_n} \big \}_{n=1}^N\) on \(\Omega _{1D} \times I\) so that we can neglect the error due to the space–time discretization.

### The *a posteriori* modeling error analysis

We generalize the error analysis in [11] to an unsteady setting, to automatically produce the HiMod lookup diagram that provides the number of modes to be switched on at each finite element node and at each time of the space–time partition \(\big \{ \big (x_l^n, t_n\big )_{l=1}^{{\mathcal M}_n} \big \}_{n=1}^N\) (see Fig. 7, left for an example). The *a posteriori* analysis is carried out on the slabwise uniform HiMod formulation, while the pointwise approximation \(u_\mathbf{M}^h\) constitutes the output of the adaptive procedure in the next section.

*J*. Notice that, since \(V_\mathbf{m}^N \not \subset V\), *J* has to be defined on \(V\cup V_\mathbf{m}^N\), and analogously for \(J_\mathrm{cGdG}\). A null final condition, \(z_\mathbf{m}^{N, +}=0\), allows us to get rid of the first integral in (14), whereas boundary contributions may modify the definition of \(J_\mathrm{cGdG}\) when the functional *J* involves a control on the boundary. The assignment of boundary conditions to the dual problem is a crucial issue that is usually tackled via the Lagrange identity.

### *Remark 3*

*w*, \(\zeta \in V\cup V_\mathbf{m}^N\). This form better fits the dual setting due to the reverse time scale.

*a posteriori* modeling error estimator, we need to introduce the enriched primal and dual slabwise uniform HiMod problems,

### **Proposition 1**

*Let* \(e_{\mathbf{m}}=u-u_{\mathbf{m}}\in V\cup V_\mathbf{m}^N\) *and* \(e_{\mathbf{m}^+}=u-u_{\mathbf{m}^+}\in V\cup V_{\mathbf{m}^+}^N\) *be the modeling errors associated with the reduced formulations* (8) *and* (15), *respectively, for* \(\mathbf{m}, \mathbf{m}^+\in \big [ {\mathbb {N}}^+\big ]^N\) *with* \(\mathbf{m}^+>\mathbf{m}\). *Let us assume that the final dual data* \(z_\mathbf{m}^{N, +}\) *and* \(z_{\mathbf{m}^+}^{N, +}\) *are identically equal to zero. Then, if there exist a positive constant* \(\sigma _\mathbf{m}<1\) *and a modal multi-index* \(\mathbf{M}_0\in \big [ {\mathbb {N}}^+ \big ]^N\) *such that, for* \(\mathbf{m}^+>\mathbf{m}\ge \mathbf{M}_0\), *the following two-sided inequality holds*

*with* \(\delta u_{\mathbf{mm}^+}=u_{\mathbf{m}^+} - u_\mathbf{m}\).

### Construction of the HiMod lookup diagram

Estimator \(\eta _{\mathbf{mm}^+}\) is now used to automatically select the pointwise HiMod approximation \(u_\mathbf{M}^h\) for problem (2) that guarantees the desired accuracy TOL on the functional error \(J(u-u_\mathbf{M}^h)\).

*m* and \(m^+\). Then, we resort to the following five-stage procedure:

- (S1)
we compute the discrete uniform reduced primal and dual solutions, \(u_m^h\), \(u_{m^+}^h\), \(z_m^h\), \(z_{m^+}^h\), on the whole space–time cylinder *Q*;
- (S2)
we evaluate the modeling estimator \(\eta _{mm^+}^n=\eta _{mm^+}\big |_{S_n}\) localized to each space–time slab \(S_n\);

- (S3)
we apply the adaptive procedure outlined in Fig. 5 on each slab \(S_n\) to predict the corresponding nodewise modal distribution \(\mathbf{M}_n\), i.e., to build the HiMod lookup diagram (see below for all the details);

- (S4)
we compute the discrete pointwise reduced primal and dual solutions, \(u_\mathbf{M}^h\), \(u_{\mathbf{M}^+}^h\), \(z_\mathbf{M}^h\), \(z_{\mathbf{M}^+}^h\), associated with the HiMod diagram yielded at stage (S3);

- (S5)
we evaluate the global modeling error estimator \(\eta _{\mathbf{MM}^+}\) by employing the pointwise solutions identified at stage (S4). Then, if the global tolerance is met, i.e., \(\eta _{\mathbf{MM}^+}\le \) TOL, the procedure stops, providing the HiMod lookup diagram in (S3) as final outcome. Vice versa, if \(\eta _{\mathbf{MM}^+}>\) TOL, we come back to (S2).
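The five stages can be sketched as a driver loop; this is a schematic with stubbed solvers and estimators (all function names are hypothetical, not the authors' implementation):

```python
# Schematic sketch of the five-stage loop (S1)-(S5). All solver and estimator
# routines are injected stubs standing in for the actual HiMod computations.

def adapt_himod(solve_uniform, eta_slab, predict_M, solve_pointwise,
                eta_global, TOL, max_iter=50):
    state = None
    for it in range(1, max_iter + 1):
        sols = solve_uniform(state)                    # (S1) uniform primal/dual solves
        local = [eta_slab(sols, n) for n in range(sols["N"])]  # (S2) slabwise estimator
        diagram = predict_M(local)                     # (S3) HiMod lookup diagram
        psols = solve_pointwise(diagram)               # (S4) pointwise solves
        if eta_global(psols) <= TOL:                   # (S5) global check
            return diagram, it
        state = diagram                                # otherwise go back to (S2)
    raise RuntimeError("adaptive loop did not converge")

# Toy stubs: the global estimator halves at each iteration.
calls = {"k": 0}
def predict_M(local):
    calls["k"] += 1
    return [len(local)] * 3
diagram, its = adapt_himod(
    solve_uniform=lambda state: {"N": 3},
    eta_slab=lambda sols, n: 1.0 / (n + 1),
    predict_M=predict_M,
    solve_pointwise=lambda d: {"it": calls["k"]},
    eta_global=lambda p: 1.0 / 2 ** p["it"],
    TOL=0.1)
```

The control flow mirrors the text: the loop exits at (S5) when \(\eta _{\mathbf{MM}^+}\le \) TOL and otherwise restarts from (S2) with the current diagram.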

*s*)]-dG(*q*) scheme. More sophisticated approaches, such as checkpointing [37], may be adopted to further reduce the computational costs. The modeling estimator can obviously be evaluated in correspondence with any HiMod approximation [uniform as in (S2), slabwise uniform as in (19), pointwise as in (S5)]. Indeed, via the first definition in (19), it suffices to properly evaluate the bilinear form (6). Concerning the localization of the estimator to \(S_n\) at stage (S2), by exploiting again the first definition in (19), we have

- (S3_1)
we assign a number of modes equal to *m* to each node and to each subinterval of partition \({\mathcal T}_{h_n}\);
- (S3_2)
we evaluate the estimator \(\eta _{mm^+}^{n, l}=\eta ^n_{mm^+}\big |_{K^n_l}\) localized to each interval \(K_l^n\) of \({\mathcal T}_{h_n}\), for \(l=1, \ldots , {\mathcal M}_n\);

- (S3_3)
we invoke an equidistribution criterion on the slabs as well as on the subintervals \(K_l^n\). If \(\eta _{mm^+}^{n,l}>\) TOL \(\, \delta _\mathrm{1M}/(N {\mathcal M}_n)\), we increase by one the modal index associated with \(K_l^n\) (model refinement); if \(\eta _{mm^+}^{n,l}<\) TOL \(\, \delta _\mathrm{2M}/(N {\mathcal M}_n)\), we decrease by one such an index (model coarsening); otherwise, we preserve the current modal index;

- (S3_4)
we update the number of modes associated with each finite element node by assigning to the generic node \(x_l^n\), for \(l=1, \ldots , {\mathcal M}_n -1\), a number of modes equal to \(m_{n,l}=\min (m_{K_l^n},m_{K_{l+1}^n})\), with \(m_{K_l^n}\) the number of modes assigned on the interval \(K^n_l\). In particular, to avoid an abrupt variation of modes on consecutive nodes, the actual value \(m_{n,l}^*\) associated with \(x_l^n\) coincides with \(\max (0.5\, m_{n, l-1}+ 0.5\, m_{n,l +1}-3, m_{n,l})\). The endpoints of \(\Omega _{1D}\) are updated separately as \(m_{n,0}=m_{K_1^n}\) and \(m_{n, {\mathcal M}_n}=m_{K_{{\mathcal M}_n}^n}\) if Dirichlet boundary conditions are not imposed on \(\Gamma _0\) and on \(\Gamma _1\), respectively. The assignment of the modal indices \(m_{n,l}\) predicts the modal multi-index \(\mathbf{M}_n=[m_{n, 1}, \ldots , m_{n, N_{h_n}}]'\) for the slab \(S_n\).

Of course, steps (S3_1)–(S3_4) are then repeated for the enriched modal index \(m^+\), with a view to the evaluation of the modeling error estimator at stage (S5).
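Substeps (S3_3)–(S3_4) can be sketched as follows (hypothetical names; the tolerances stand for TOL \(\delta _\mathrm{1M}/(N {\mathcal M}_n)\) and TOL \(\delta _\mathrm{2M}/(N {\mathcal M}_n)\), and the integer truncation in the smoothing plus the floor of one mode are our assumptions):

```python
def update_modes(m_K, eta_K, tol_refine, tol_coarsen, dirichlet=(False, False)):
    """One pass of (S3_3)-(S3_4): m_K = modes per interval, eta_K = local estimators."""
    # (S3_3) equidistribution: refine/coarsen the modal index of each interval
    m_K = [m + 1 if eta > tol_refine else (max(1, m - 1) if eta < tol_coarsen else m)
           for m, eta in zip(m_K, eta_K)]
    # (S3_4) interior node x_l takes the minimum of the two adjacent intervals
    m_node = [min(m_K[l], m_K[l + 1]) for l in range(len(m_K) - 1)]
    # smoothing against abrupt variations on consecutive interior nodes
    m_star = [max(int(0.5 * m_node[l - 1] + 0.5 * m_node[l + 1] - 3), m_node[l])
              if 0 < l < len(m_node) - 1 else m_node[l]
              for l in range(len(m_node))]
    # endpoints are updated separately unless Dirichlet data are imposed there
    left = [] if dirichlet[0] else [m_K[0]]
    right = [] if dirichlet[1] else [m_K[-1]]
    return left + m_star + right

# Invented example: the two central intervals are flagged for refinement.
M_n = update_modes(m_K=[3, 3, 3, 3], eta_K=[0.5, 2.0, 2.0, 0.5],
                   tol_refine=1.0, tol_coarsen=0.6)
```

The returned list is the nodewise modal multi-index \(\mathbf{M}_n\) for the slab, one entry per finite element node (endpoints included).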

### Numerical verification

The numerical verification is carried out in a 2D setting. Moreover, to select the discrete HiMod space, we choose \(q=0\) and \(s=1\), i.e., we use linear finite elements to discretize the leading dynamics and functions piecewise constant in time. It can be checked that the adopted time discretization is equivalent to a modified backward Euler scheme [15].
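The equivalence with a modified backward Euler scheme can be seen concretely on a scalar sketch (model problem and quadrature chosen by us for illustration): for \(u' + \lambda u = f\), the dG(0) equation on each slab reduces to a backward Euler step whose source is the slab average of *f*, which is the "modification".

```python
import math

# dG(0) for u' + lam*u = f(t): the slabwise constant u_n satisfies
#   u_n - u_{n-1} + lam*k_n*u_n = \int_{I_n} f dt,
# i.e., a backward Euler step driven by the slab-averaged source
# (approximated here by the midpoint rule).

def dg0_step(u_prev, lam, t0, t1, f):
    k = t1 - t0
    f_avg = f(0.5 * (t0 + t1))        # slab average of the source
    return (u_prev + k * f_avg) / (1.0 + lam * k)

lam, T, N = 2.0, 1.0, 100
f = lambda t: math.sin(t)
u = 1.0                               # initial datum u(0) = 1
for n in range(N):
    u = dg0_step(u, lam, n * T / N, (n + 1) * T / N, f)
```

For this problem the exact value is \(u(1)=1.2\,e^{-2}+(2\sin 1-\cos 1)/5\approx 0.391\), and the dG(0) trajectory converges to it at first order in *k*, exactly as backward Euler does.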

### Reliability of the adaptive HiMod reduction procedure

*u* approximated with FreeFem++ via a standard 2D cG(1)-dG(0) scheme on a uniform unstructured mesh of 10,252 triangles. As expected, the convective field acts on the purely diffusive phenomenon by horizontally bending the contour lines. From a modeling viewpoint, we are simulating, for instance, the process of convection and diffusion of a pollutant emitted by a chimney localized at \(\mathcal D\), in the presence of a moderate horizontal wind. In this context, the full solution *u*(*t*) represents the pollutant concentration in the domain \(\Omega \) at a certain time \(t\in I\).

We aim at controlling the mean value of the full solution on the whole \(\Omega \) at the final time \(T=1\), i.e., we select the goal functional *J* as \(J_{\mathrm{mean}, T}(\zeta )=[ \mathrm {meas}(\Omega ) ]^{-1}\int _{\Omega } \zeta (x, y, 1)\, d\Omega \). The choice of a localized functional is challenging with a view to the modeling adaptive procedure. The dual problem is characterized by the differential operator \(L^*z=-\Delta z - \mathbf{c} \cdot \nabla z\), with source term given by the density function \({\widetilde{j}}(x, y, t)=[ \mathrm {meas}(\Omega ) ]^{-1} \delta _T\) associated with \(J_{\mathrm{mean}, T}\), where \(\delta _T\) denotes the Dirac distribution associated with the final time. On \(\Gamma _N\) a homogeneous Robin boundary condition is imposed, while a homogeneous Dirichlet datum is assigned on \(\partial \Omega \backslash \Gamma _N\). A null final value \(z_\mathbf{m}^{N, +}\) is selected.

Both the primal and dual problems are computed by discretizing the supporting fiber \((0, 3)\times \{ 0.5\}\) via a uniform partition of size \(h=0.15\) and the time window with a constant step \(k=0.1\). The modal basis \({\mathcal B}\) consists of sinusoidal functions.

*m* and \(m^+\) are set to 1 and 3, respectively.

The adaptive algorithm converges after 21 iterations and provides as output the HiMod lookup diagram in Fig. 7, left. The diagram coincides with the space–time rectangle \(\Omega _{1D}\times I\), where \(\Omega _{1D}\) and *I* exhibit the corresponding partitions of uniform size *h* and *k*, respectively. A certain number of modal functions is associated with each cell \(K_l^n\times k\) for \(l=1, \ldots , {\mathcal M}_n\) and \(n=1, \ldots , N\). Thus, by resorting to step (S3_4) of the procedure in Fig. 5, it is possible to build the HiMod pointwise approximation \(u_{\mathbf{M}_n}^h\) for \(n=1, \ldots , N\), i.e., the reduced solution \(u_\mathbf{M}^h\) that guarantees the estimate \(|J_{\mathrm{mean}, T}(u)-J_{\mathrm{mean}, T}(u_\mathbf{M}^h)|<\) TOL.
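In practice, a lookup diagram can be stored as a plain table that maps each time slab to its spatial mesh and per-element modal distribution. The following minimal sketch uses purely illustrative names and values, not the authors' data structure:

```python
# A minimal in-memory representation of a HiMod lookup diagram
# (slab index n -> time span, mesh size, and modes per element; values are invented).
lookup_diagram = {
    1: {"t_span": (0.0, 0.1), "h": 0.15, "modes": [3] * 20},
    2: {"t_span": (0.1, 0.2), "h": 0.15, "modes": [3] * 18 + [5, 5]},
}

def modes_at(diagram, n, l):
    """Number of modal functions on element K_l^n of slab n (0-based l)."""
    return diagram[n]["modes"][l]

print(modes_at(lookup_diagram, 2, 18))  # -> 5
```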

The HiMod diagram in Fig. 7, left shows that few modes are demanded on the whole space–time domain, except for the last two time intervals, where a larger number of modes is switched on in correspondence with the localized source and the downstream region. More quantitative information is provided by the plot in Fig. 7, center, of the number of modes associated with node \(x=1.5\) as a function of time. Only three modes are used on the whole time interval, except for the subintervals \(I_{N-1}\) and \(I_N\), where five and 13 sine functions are required, respectively. The modal distribution predicted by the lookup diagram is completely coherent with a goal-oriented approach. Since we are interested in the mean value of the solution only at the final time, it is reasonable to expect a reliable approximation of the full solution only in correspondence with the last time intervals. This trend is confirmed by the corresponding pointwise HiMod approximation, which reproduces the full one more closely during the final times of the simulation (compare Fig. 6, left and right).

In Fig. 7, right we show the value of \(\eta _{\mathbf{MM}^+}\) on the same space–time structure of the HiMod diagram. The boxes associated with the largest values of the estimator identify a pattern similar to the one in Fig. 7, left.

### Sensitivity of the adaptive HiMod reduction procedure to the goal-functional

*T* in Fig. 6, left-bottom. The mean value is controlled in an area where the full solution is extremely smooth, so that a single mode is enough.

### Robustness of the HiMod lookup diagram

The computational effort demanded by the adaptive procedure is justified by the possibility of employing the lookup diagram associated with a specific setting to hierarchically reduce a variant of it. Figure 9 performs this check on three variants of the test case in Fig. 6. In more detail, we adopt the HiMod lookup diagram in Fig. 10, top-left to build the HiMod approximation for three new advection-diffusion problems, characterized by a different choice of the source term, namely, \(f_1\equiv 10 \chi _{\mathcal D_1}\) with \(\mathcal D_1=\{ (x, y):(x-1.5)^2 + 4(y-0.45)^2\le 0.01\}\) (Fig. 9, top), \(f_2\equiv 10 \chi _{\mathcal D_2}\) with \(\mathcal D_2=\{ (x, y):(x-1.7)^2 + 4(y-0.25)^2\le 0.01\}\) (Fig. 9, middle) and \(f_3\equiv 10 \chi _{\mathcal D_3}\) with \(\mathcal D_3=\{ (x, y):(x-1.5)^2 + 4(y-0.25)^2\le 0.01\}\cup \{ (x, y):(x-1.5)^2 + 4(y-0.65)^2\le 0.01\}\) (Fig. 9, bottom), respectively. Figure 9, left shows the HiMod approximations thus obtained. To check the reliability of the obtained solutions, we apply the HiMod adaptive algorithm directly to the three new problems. The corresponding lookup diagrams are gathered in Fig. 10, whereas the associated HiMod approximations are collected in Fig. 9, right. The matching between the contour plots in the two columns of Fig. 9 is substantial, in particular for the two-source test case, which represents the most significant variant with respect to the reference configuration.

## Combined HiMod reduction and space–time adaptation

The goal of this section is to enrich the information provided by the HiMod lookup diagram by predicting also the space–time partition of \(\Omega _{1D} \times I\). Consequently, we remove any assumption on the finite element discretization as well as on the time partition \(\{I_n\}\). In practice, we expect to replace the diagram in Fig. 7, left with a new diagram characterized by a non-uniform horizontal (spatial) and vertical (temporal) spacing.

### The *a posteriori* estimator for the global error

With a view to a global adaptation, following, e.g., [11, 20–22], we derive an *a posteriori* estimator for the global error \({\mathcal E}_\mathbf{m}^h=e_\mathbf{m}+e_\mathbf{m}^h\), where the contributions of the modeling (\(e_\mathbf{m}=u-u_\mathbf{m}\)) and of the discretization (\(e_\mathbf{m}^h=u_\mathbf{m}-u_\mathbf{m}^h\)) errors remain distinct. In particular, since we are interested also in an adaptive selection of the space and time step size, we expect that the estimator for the discretization error consists of a spatial contribution separate from the temporal one [23–26].

As for the adaptive HiMod reduction, we carry out the *a posteriori* analysis in a slabwise uniform HiMod setting. The following statement plays a crucial role in the definition of the global error estimator.

###
**Proposition 2**

*We assume that saturation assumption* (17) *holds, and we choose* \(z_\mathbf{m}^{N, +}=z_{\mathbf{m}^+}^{N, +}=0\). *Then, for any* \(\mathbf{m}\), \(\mathbf{m}^+\in \big [ {\mathbb {N}}^+ \big ]^N\), *with* \(\mathbf{m}^+>\mathbf{m}\ge \mathbf{M}_0\) *and* \(\mathbf{M}_0\) *defined as in Proposition* 1, *it turns out that estimate* (21) *holds. Moreover, if there exists a constant* \(\lambda \), *with* \(0<\lambda <1\), *such that condition* (22) *is satisfied, it additionally holds that inequality* (23) *is fulfilled.*

Quantity \(\eta _{\mathbf{mm}^+}^h\) plays the role of an *a posteriori* error estimator for the global error \({\mathcal E}_\mathbf{m}^h\). As a consequence, inequalities (21) and (23) state the reliability and the efficiency of such an estimator. The first term of \(\eta _{\mathbf{mm}^+}^h\) exactly coincides with the modeling error estimator in (19), while the second contribution takes into account the error associated with both the spatial and the temporal discretization. The main effort of this section is to explicitly estimate this second term, with the additional requirement of distinguishing the space from the time contribution. As in [11], we modify the standard goal-oriented analysis to tackle the intrinsically hybrid dimensional nature of a HiMod reduced formulation.

Concerning hypothesis (22), it essentially coincides with a sufficient grid-resolution requirement, since it establishes a ratio between the modeling and the discretization errors. With a view to estimating \(|J(e_\mathbf{m}^h)|\), we preliminarily prove the following Galerkin orthogonality property for the discretization error \(e_\mathbf{m}^h\).

###
**Lemma 1**

*For any* \(v_\mathbf{m}^h\in V_{\mathbf{m}, h}^N\), *the following relation holds*:

###
*Proof*

*w*, \(\zeta \in V\cup V_\mathbf{m}^h\). Now, since \(V_{\mathbf{m}, h}^N \big |_{S_n}\subset V_\mathbf{m}^N \big |_{S_n}\), we subtract (27) from (26) after identifying \(v_\mathbf{m}\) with \(v_\mathbf{m}^h\), to get the orthogonality relation

*L* in (1) to the prism \(S_{R_l^n}\), and \([\partial _\nu u_\mathbf{m}^h]\) is the jump of the conormal derivative of \(u_\mathbf{m}^h\) across an edge of the skeleton \(\mathcal {E}_h^n=\{ \zeta _\tau ^n \}_{\tau =1}^{{\mathcal M}_n-1}\). We now consider the temporal residual associated with \(u_\mathbf{m}^h\) and with the time level \(t_n\)

*c* constant in time, so that

*K* denotes a generic interval of \(\Omega _{1D}\), \(\widetilde{K}\) is the associated patch of elements, and \({\mathcal C}_1\) and \({\mathcal C}_2\) are constants depending on the relative size of the elements constituting \(\widetilde{K}\) [38].

We are now ready to prove the following result:

###
**Proposition 3**

*Let* \(\Omega \subset {\mathbb {R}}^2\). *Let us assume that the approximation* \(u_\mathbf{m}^{h, 0, -}\) *of the initial datum coincides with the* \(L^2\)-*projection* \({\mathcal P}_{I_1}(u_\mathbf{m}^{0, -})\) *of* \(u_\mathbf{m}^{0, -}\) *onto the space* \(V_{\mathbf{m}, h}^N\big |_{I_1}\). *Moreover, we choose* \(z_\mathbf{m}^{N, +}=0\). *Then, the following estimate for the functional error* \(|J(e_\mathbf{m}^h)|\) *holds, with* \({\mathcal C}\) *a constant depending on the interpolation constants in* (34) *and* (35), *on* *q*, *and on* \(\max _n m_n\), *where the residuals are defined slabwise, with* \(\overline{r}_{R_l^n}=T_n r_{R_l^n}\) *and* \(\overline{j}_{R_l^n}=T_n j_{R_l^n}\), *with* \(h_l^n\) *and* \(k_n\) *the lengths of the generic subintervals* \(K_l^n\) *and* \(I_n\), *respectively, for* \(l= 1, \ldots , {\mathcal M}_n\) *and* \(n=1, \ldots , N\), *and with* \(\delta _{1,n}\) *the Kronecker symbol associated with the first slab* \(S_1\), *while the weights are defined on the patch associated with the subinterval* \(K^n_l\), *with* \(L(x)=\mathrm {meas} (\gamma _x)\), *and with* \(\widetilde{z}_{j,r}^{\, n}\) *and* \(\widetilde{z}_{j,r}^{\, n, h}\) *the modal coefficients associated with the dual solution* \(z_{\mathbf{m}}\) *and with the corresponding discretization* \(z_\mathbf{m}^{h}\), *respectively*.

###
*Proof*

*x*-dependent modal coefficients, since it is one-dimensional. Notice that, since we estimate the terms (I)–(IV) slabwise, all the functions in \(V_\mathbf{m}^N\) and \(V_{\mathbf{m}, h}^N\) are to be understood as restricted to \(I_n\), for each \(n=1, \ldots , N\). Function \(v_\mathbf{m}^h\) is extended by zero outside \(I_n\) when considered as a function of \(V_{\mathbf{m}, h}^N\).

*q* and \(m_n\). From now on, \(\mathcal C\) denotes a constant whose value may change from line to line. Term \(\mathrm{(II)}\) can be bounded analogously to contribution \(\mathrm{(I)}\), by restricting the computations to the lateral surface \(L_{R_l^n}\) of \(S_{R_l^n}\). This yields

*q* and \(m_n\). We now focus on term \(\mathrm{(III)}\) and, first of all, we apply splitting (39) again:

*x*- and *y*-direction in the weights is split.

Some computational remarks on estimator \(\eta ^h\) are now in order.

To make the weights computable, we replace the dual solution \(z_\mathbf{m}\) with a computable discrete counterpart \(z_\mathbf{m}^{*, h}\). A possibility is to resort to the discrete enriched dual solution \(z_{\mathbf{m}^+}^h\). Nevertheless, since the temporal weights involve the time derivative of \(z_\mathbf{m}\), we resort to a temporal recovery procedure yielding an approximation \(z_\mathbf{m}^{*, h}\) that is at least linear in time. In particular, we follow the approach in [25, 26]. The dependence of the weights on the dual discretization error rather than on the dual solution is optimal in terms of convergence. Moreover, the time-averaged residuals \(\overline{r}_{R_l^n}\) and \(\overline{j}_{R_l^n}\) make the estimator more reliable, since \(\Vert \overline{w} \Vert _{L^2(I_n)}\le \Vert w \Vert _{L^2(I_n)}\) as well as \(\Vert w - \overline{w} \Vert _{L^2(I_n)}\le \Vert w \Vert _{L^2(I_n)}\), for any function \(w\in L^2(I_n)\). Extra care has to be devoted to the computation of the temporal residual \({\mathcal J}_{n-1}\), which combines solutions associated with two different meshes. We use an interpolation operator from the degrees of freedom of \({\mathcal T}_{h_n}\) onto the ones associated with \({\mathcal T}_{h_{n+1}}\). Finally, the analysis in Proposition 3 may be generalized to a 3D framework, provided that the map \(\psi _x\) is properly chosen. In particular, the orthonormality of basis \(\mathcal B\) may be exploited to derive estimates (40) and (43) only if \({\mathcal D}^{-1}(x, \psi _x^{-1}(\widehat{\mathbf{y}}))\) does not depend on \(\widehat{\mathbf{y}}\). This has to be explicitly demanded in a 3D setting, while it always holds in a 2D framework.
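The two inequalities on the time-averaged residuals follow from the fact that \(\overline{w}\) is the \(L^2(I_n)\)-projection of *w* onto constants; they can be checked numerically on random samples, e.g.:

```python
import numpy as np

t = np.linspace(0.0, 0.1, 2001)   # a sample time slab I_n = (0, 0.1)
h = t[1] - t[0]
wq = np.full_like(t, h); wq[0] = wq[-1] = h / 2.0   # trapezoidal quadrature weights

def l2_norm(v):
    # discrete L2(I_n)-norm via the trapezoidal rule
    return np.sqrt(np.sum(wq * v**2))

rng = np.random.default_rng(0)
for _ in range(100):
    coeffs = rng.normal(size=4)   # a random smooth test function on I_n
    v = sum(c * np.cos(k * np.pi * t / 0.1) for k, c in enumerate(coeffs))
    vbar = np.sum(wq * v) / 0.1   # slabwise time average of v
    assert l2_norm(np.full_like(t, vbar)) <= l2_norm(v) + 1e-12
    assert l2_norm(v - vbar) <= l2_norm(v) + 1e-12
print("both inequalities hold for 100 random samples")
```

Both bounds actually hold exactly at the discrete level as well, since the quadrature weights are positive (discrete Cauchy–Schwarz and Pythagoras arguments).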

### Building the space–time HiMod lookup diagram

The goal of this section is to keep the global functional error below a fixed tolerance TOL via an automatic selection of the modal distribution and now also of the space–time mesh \(\big \{\big (K_l^n, I_n\big )_{l=1}^{{\mathcal M}_n}\big \}_{n=1}^N\).

After a preliminary check on the accuracy of the global error estimator associated with the initial uniform modal distribution and the initial uniform space–time grid, model adaptation takes place until the accuracy TOL_MODEL is met by estimator \(\eta _{\mathbf{MM}^+}\). Then, we check whether model adaptation suffices to meet the global tolerance TOL without any space–time mesh adaptation. If not, the module ADMESH is activated. In particular, we apply the spatial rather than the temporal adaptation depending on which of the estimators \(\eta _S^h\), \(\eta _T^h\) is the larger. When \(\eta ^h<\) TOL_MESH, we come back to the initial check on the global accuracy.

A maximum number of iterations ensures the termination of the whole adaptive procedure. We remark that, each time the space–time partition is updated, a projection of the primal and dual solutions involved in the evaluation of the error estimator is demanded. As for the choice of the tolerances, we resort to a convex combination, by selecting TOL_MODEL \(=\theta \) TOL and TOL_MESH \(=(1-\theta )\) TOL, with \(0\le \theta \le 1\) [11]. The parameter \(\theta \) establishes a relation between the model and the discretization error, in accordance with requirement (22).
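The overall control flow can be summarized by the following toy skeleton, in which each "refinement" simply halves the targeted estimator; it is only a schematic stand-in for the actual modules (model adaptation, ADMESH), not the authors' implementation:

```python
# Toy version of the combined adaptive loop: estimators are plain numbers,
# and every adaptation step halves the one being targeted.
def adapt(state, TOL, theta=0.5, max_iter=100):
    TOL_MODEL, TOL_MESH = theta * TOL, (1.0 - theta) * TOL
    for it in range(1, max_iter + 1):
        if state["eta_model"] + state["eta_S"] + state["eta_T"] < TOL:
            return state, it                          # global accuracy reached
        if state["eta_model"] >= TOL_MODEL:           # model adaptation first
            state["eta_model"] *= 0.5
        elif state["eta_S"] + state["eta_T"] >= TOL_MESH:
            if state["eta_S"] >= state["eta_T"]:      # mesh adaptation: space...
                state["eta_S"] *= 0.5
            else:                                     # ...or time
                state["eta_T"] *= 0.5
    return state, max_iter

state, its = adapt({"eta_model": 1.0, "eta_S": 0.8, "eta_T": 0.6}, TOL=0.1)
print(state["eta_model"] + state["eta_S"] + state["eta_T"] < 0.1)  # -> True
```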

Finally, we refer to the outcome of the whole adaptive algorithm as the space–time HiMod lookup diagram. Some instances of this table are provided in the next section.

### Numerical verification

In this section we assess the reliability of the global adaptive procedure.

### Reliability of the space–time adaptive HiMod reduction procedure

*m* and \(m^+\), and for the initial space–time mesh. Then, we set \(\theta =0.5\).

The adaptive procedure converges after 50 iterations, with 23 model iterations followed by nine and eight adaptations of the spatial and of the temporal mesh, respectively, and by ten additional model adaptations. The final outcome of the adaptive procedure is the HiMod lookup diagram in Fig. 13, top-left. A comparison between this table and the one in Fig. 7, left shows a similar trend for the modes, i.e., a gradual increment of the number of modes as we approach the final time and in correspondence with the source location and the downstream areas. Nevertheless, the combination of model and mesh adaptation reduces the number of modes used in the first phase of the test case from three to one (compare Fig. 7, center with Fig. 14, left). Concerning the spatial adaptation, a coarse mesh, consisting of fewer than 20 subintervals and refined around \(x=1.5\), is predicted for the first time intervals. Then, this number increases, with an abrupt variation in the last time interval, when it reaches its maximum (see Fig. 14, center). The monotone trend characterizing the model and the spatial mesh adaptation is qualitatively the same, exhibiting a refinement of the modes and of the finite element partition confined to the last time intervals, in accordance with the goal quantity.

*T*. A strong refinement of the initial grid is recurrent in mesh adaptation, and here it likely balances the initial rough modal and spatial discretizations. The second refinement occurs when the control of the mean value becomes more relevant. At time \(t=0.8\), both the modal discretization and the space–time mesh are considerably refined to ensure the imposed tolerance. A complex interplay among the three discretizations probably takes place during the last time intervals, so that the severe demand on the time step can then be relaxed before reaching the final time.

Figure 13 gathers the distribution of the three error estimators on the space–time lookup diagram. The choice made for the tolerances leads to values of the same order of magnitude for \(\eta _{\mathbf{MM}^+}\) and \(\eta _S^h\), while the error estimator associated with the time discretization assumes larger values.

### Assignment of Neumann boundary conditions

We challenge the whole adaptive procedure by modifying the boundary conditions of the previous test case. We assign a homogeneous Neumann condition on the whole boundary, except for the edge \(\Gamma _D=\{ (0, y):0\le y\le 1 \}\), where we preserve the homogeneous Dirichlet data. The new condition along the horizontal sides leads us to select a new modal basis. After identifying the reference fiber \(\widehat{\gamma }_1\) with the interval [0, 1], we choose \({\mathcal B}=\{\varphi _j(\widehat{y})= \sqrt{2}\cos (\pi j \widehat{y})\}_{j\in {\mathbb {N}}}\).

Figure 16, left shows the cG(1)-dG(0) full solution at four different times, computed with FreeFem++ on a uniform unstructured mesh of 10,252 elements. In particular, the new flux-free configuration erases the horizontal dynamics in Fig. 6, pushing the pollutant to contaminate also the northeast and the southeast areas. If we set the global adaptive procedure to control \(J_{\mathrm{mean}, T}\), we do not expect much benefit from the modal basis, since all the cosine functions have a null mean except \(\varphi _0\). Figures 17, top and 18, top-left collect some results of the global adaptive procedure for TOL_MODEL \(=\) TOL_MESH \(=5\times 10^{-3}\). The adaptive algorithm stops after ten iterations. No model adaptation is performed and only function \(\varphi _0\) is switched on. On the contrary, both the spatial and the temporal meshes are adapted, via seven and three iterations, respectively. The cardinality of the finite element mesh reaches a minimum in the middle of the interval *I*, while, after an initial refinement, the time step increases back to the initial value 0.1. Overall, the modal-space–time discretization is coarse, as shown by the HiMod lookup diagram. The corresponding c[M(\(\mathbf{M}\))G(1)]-dG(0) HiMod solution is provided in Fig. 18, bottom for two different times. It is not surprising that \(u_\mathbf{M}^h\) loses the essential features of the full solution, due to the deficiency of the reduced model. Smaller values of TOL, of course, do not modify this trend.
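The claim that only \(\varphi _0\) contributes to the mean value can be verified directly: every cosine mode with \(j\ge 1\) has zero mean on (0, 1). A quick numerical check (with the *j* = 0 mode normalized to 1, so that the family is \(L^2(0,1)\)-orthonormal) reads:

```python
import numpy as np

yhat = np.linspace(0.0, 1.0, 20001)
h = yhat[1] - yhat[0]
wq = np.full_like(yhat, h); wq[0] = wq[-1] = h / 2.0   # trapezoidal weights

def cos_mode(j, yhat):
    # L2(0,1)-orthonormal cosine modes: phi_0 = 1, phi_j = sqrt(2) cos(pi j yhat)
    return np.ones_like(yhat) if j == 0 else np.sqrt(2.0) * np.cos(np.pi * j * yhat)

means = [float(np.sum(wq * cos_mode(j, yhat))) for j in range(5)]
print(means[0], max(abs(m) for m in means[1:]))  # only phi_0 has a nonzero mean
```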

### Robustness of the space–time HiMod lookup diagram

### Computational saving

The goal of this section is to verify the benefits of the HiMod adaptation procedure in terms of CPU time^{2} with respect to a full and a uniform HiMod approximation. For the sake of simplicity, we consider a steady problem. We solve on the rectangular domain \(\Omega = (0,2 \pi ) \times (0,\pi )\) the advection-diffusion problem \(- \Delta u + \mathbf{c} \cdot \nabla u = f\), with \(\mathbf{c}=(10, 0)'\), by assigning homogeneous Dirichlet data on \(\partial \Omega \backslash \Gamma _N\), with \(\Gamma _N=\{ (2\pi , y) :0\le y \le \pi \}\), and a homogeneous Neumann datum on \(\Gamma _N\). Then, we choose the source term such that the exact solution coincides with \(u(x, y)=\sin y \sin \big ( 0.01 y (x^3 - 12 \pi ^2 x )\big )\) (see Fig. 21, top-left). We first investigate the advantages of a uniform HiMod reduction with respect to a standard 2D finite element approximation. We fix a number of dof around 190, and we compute the \(L^2(\Omega )\)-norm of the error associated with the full approximation and with the uniform HiMod solution based on 17 modes and a uniform subdivision of the supporting fiber into 11 subintervals (see Fig. 21, top-right). As Table 1 shows, we gain an order of accuracy via the HiMod reduction. A comparison in terms of CPU time is not meaningful in such a case, since the HiMod code is not yet optimized. By resorting to modal adaptivity and for a comparable number of dof, we obtain a HiMod approximation more accurate than the uniform one (compare the contour plots in Fig. 21, top-right and bottom-left, and the values in Table 1), with a similar CPU time (in s). The modal distribution yielded by the adaptive procedure is shown in Fig. 22, left. Fewer than 17 modes are demanded on the whole domain, except for the last three nodes. Concerning the CPU time, we quantify only the seconds demanded to build the HiMod approximation from the predicted modal distribution, since we have already verified the robustness of the HiMod diagrams.
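As a sanity check, the chosen exact solution is compatible with the assigned boundary conditions: it vanishes on the Dirichlet portion of \(\partial \Omega \), and its normal derivative vanishes on \(\Gamma _N\) (since \(3x^2-12\pi ^2=0\) at \(x=2\pi \)). A quick numerical verification:

```python
import numpy as np

def u(x, y):
    # exact solution of the steady test case
    return np.sin(y) * np.sin(0.01 * y * (x**3 - 12 * np.pi**2 * x))

def du_dx(x, y):
    # partial derivative of u with respect to x (normal derivative on Gamma_N)
    return (np.sin(y) * np.cos(0.01 * y * (x**3 - 12 * np.pi**2 * x))
            * 0.01 * y * (3 * x**2 - 12 * np.pi**2))

x = np.linspace(0.0, 2 * np.pi, 101)
y = np.linspace(0.0, np.pi, 101)
print(np.max(np.abs(u(x, 0.0))), np.max(np.abs(u(0.0, y))))  # Dirichlet sides: ~0
print(np.max(np.abs(du_dx(2 * np.pi, y))))                   # Neumann side x = 2*pi: ~0
```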

Computational saving check: comparison between full and HiMod approximations for about the same number of dof

| | dof | Error | CPU time (s) |
|---|---|---|---|
| Full | 190 | 0.506 | – |
| Uniform HiMod | 187 | 0.084 | 0.453 |
| Model adaptation | 193 | 0.045 | 0.407 |

Computational saving check: comparison between full and HiMod approximations for about the same number of dof

| Full dof | Full error | Model \(+\) mesh adaptation dof | Error | CPU time (s) |
|---|---|---|---|---|
| 630 | 0.148 | 622 | 0.027 | 1.321 |
| 989 | 0.091 | 966 | 0.018 | 2.035 |

## Validation of the HiMod reduction

This section represents a first attempt at validating the HiMod reduction procedure. For this purpose, we focus on the experimental and modeling analysis provided in [27], dealing with reactive transport in homogeneous porous media.

We consider the experimental setting outlined in Fig. 23. It consists of a rectangular laboratory flow cell of dimensions 2.5 dm \(\times 1\) dm \(\times 0.08\) dm along the *x*-, *y*- and *z*-direction, respectively. The cell is filled with a porous medium of measured porosity 0.375 and is initially saturated with an aqueous solution. Segment \(\Gamma _\mathrm{inlet}=\{ (0, y, z):0.5\le y \le 1, 0\le z \le 0.08\}\) coincides with an inlet boundary, where a constant concentration, modeling the injection of a reactive component, is assigned. Simultaneously, a flow rate of 12 ml/h is set at the outlet \(\Gamma _\mathrm{outlet}=\{ (2.5, y, z):0\le y \le 1, 0\le z \le 0.08\}\), resulting in an average water velocity of about 0.404 dm/h at equilibrium. We remark that the set-up of the experiment is designed to yield a pseudo-1D flow, parallel to the *x*-axis. Finally, ten sampling ports are located in the cell to collect measurements of the reactive fluid concentration. Sampling is performed four times during each experiment. The concentration measurements represent the data we aim at matching via HiMod reduced modeling, in the same spirit as the analysis in [27]. The reactive transport experiment is conducted for 60 h, though a stationary state is reached already 15 h after the beginning of the experiment, so that we restrict the time window of investigation to (0, 30).
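The stated average velocity is consistent with a back-of-the-envelope computation, dividing the flow rate by the wetted cross-section (cell cross-section times porosity); all figures are taken from the experimental description above:

```python
# Consistency check of the reported average water velocity.
flow_rate = 0.012            # 12 ml/h expressed in dm^3/h
cross_section = 1.0 * 0.08   # y- and z-extent of the flow cell, in dm^2
porosity = 0.375
velocity = flow_rate / (cross_section * porosity)
print(round(velocity, 3))    # -> 0.4 dm/h, close to the reported 0.404 dm/h
```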

*z*-axis, we can effectively simulate the experiment as a two-dimensional flow. In particular, we adopt the unsteady equation

Figure 24, left shows the full solution computed with FreeFem++ on a uniform unstructured mesh of 13,078 triangles at \(t=5\), 11, 15, 19 h. The reactive fluid gradually spreads into the flow cell.

As a last test, we assess the reliability of the modeling adaptive procedure in a validation context. We aim at evaluating the reactive fluid concentration at \(\widetilde{t}=15\) h via the c[M(\(\mathbf{M}\))G(1)]-dG(0) HiMod solution predicted by the modeling adaptive procedure. We consequently choose functional *J* as \(J_{15}(\zeta )=[ \mathrm {meas}(\Omega ) ]^{-1}\int _{\Omega } \zeta (x, y, 15)\, d\Omega \). The expectation is to obtain a value for the concentration similar to the one provided by \(u_{20}^h\) and not so far from the experimental data. We set the adaptive algorithm with TOL \(=10^{-3}\), \(m=1\), \(m^+=3\). Concerning the space–time discretization, we fix a uniform space–time subdivision of \(\Omega _{1D}\times I\), with \(h=0.05\) and \(k=0.5\). Finally, we reduce the time window to (0, 15), due to the stationary regime of the flow in the interval (15, 30).

*t* approaches \(\widetilde{t}\).

Finally, we examine the concentration values predicted by the adapted HiMod solution at \(\widetilde{t}=15\) h in correspondence with the eight ports in Fig. 25 (see the square symbols). The good matching between the concentrations simulated by \(u_{20}^h\) and \(u_\mathbf{M}^h\) is evident, with a slightly different prediction at ports C3 and D3.

## Conclusions and perspectives

We have successfully extended the pointwise HiMod approach to an unsteady setting, by formalizing the so-called c[M(**M**)G(s)]-dG(q) HiMod reduction procedure. The goal-oriented *a posteriori* error analysis has allowed us to devise an automatic algorithm to select the reduced model that guarantees the desired accuracy on the functional of interest. The results yielded by the global adaptive procedure are very satisfying, despite the complex interplay among the three adaptations. The sensitivity of the predicted HiMod reduced model with respect to the goal quantity and to the assigned boundary conditions has also been assessed. We have verified the robustness of the HiMod lookup diagrams, by showing that, although strictly tailored to the problem at hand, they can be employed to deal with certain variants of such a problem. The computational advantages guaranteed by a HiMod reduction have been checked as well. Finally, the preliminary validation results in the last section are promising with a view to an effective application of HiMod to practical problems.

Prospective extensions of HiMod reduction include the approximation of nonlinear as well as 3D problems. This will be a crucial effort with a view to our last goal, i.e., to use HiMod reduction for the simulation of the blood flow in the arterial system.

To simplify the notation, the super-index *h* denotes both the space and the time discretization.

All the experiments have been performed using Matlab 2011a 64-bit on a Lenovo ThinkPad T430 equipped with an Intel Core i5 3320M 2x 2.6–3.3 GHz processor and 4 GB of RAM.

## Declarations

### Authors’ contributions

Both the authors contributed to the development of the theory, of the code and to the analysis of the numerical results. Both authors read and approved the final manuscript.

### Acknowledgements

The authors thank Giovanni Porta for the advices on the validation test case. Moreover, the first author gratefully acknowledges the NSF project DMS 1419060 “Hierarchical Model Reduction Techniques for Incompressible Fluid-Dynamics and Fluid-Structure Interaction Problems” (P.I. Alessandro Veneziani) and the MIUR-PRIN 2010/2011 project “Innovative Methods for Water Resources under Hydro-Climatic Uncertainty Scenarios” for the financial support. The second author is partially supported by the ERC Advanced Grant 2013 No. 321186, “ReaDi, Reaction–Diffusion Equations, Propagation and Modeling” and by the project ERC Advanced Grant 2013 No. 339958, “Complex Patterns for Strongly Interacting Dynamical Systems, COMPAT”.

### Competing interests

The authors declare that they have no competing interests.

**Open Access**This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

## Authors’ Affiliations

## References

- Lorenz B, Biros G, Ghattas O, Heinkenschloss M, Keyes D, Mallick B, Tenorio L, van Bloemen Waanders B, Willcox K, Marzouk Y, editors. Large-scale inverse problems and quantification of uncertainty. Vol. 712. Chichester: Wiley; 2011.
- Maday Y, Ronquist EM. A reduced-basis element method. C R Acad Sci Paris Ser I. 2002;335:195–200.
- Kunisch K, Volkwein S. Galerkin proper orthogonal decomposition methods for parabolic problems. Numer Math. 2001;148:117–48.
- Chinesta F, Ammar A, Cueto E. Recent advances and new challenges in the use of the proper generalized decomposition for solving multidimensional models. Arch Comput Methods Eng. 2010;17(4):327–50.
- Chinesta F, Keunings R, Leygue A. The proper generalized decomposition for advanced numerical simulations. A primer. SpringerBriefs in applied sciences and technology. Cham: Springer; 2014.
- Blanco PJ, Leiva JS, Feijóo RA, Buscaglia GC. Black-box decomposition approach for computational hemodynamics: one-dimensional models. Comput Methods Appl Mech Eng. 2011;200(13–16):1389–405.
- Formaggia L, Quarteroni A, Veneziani A, editors. Cardiovascular mathematics, modelling and simulation of the circulatory system. Modelling, simulation and applications. Vol. 1. Milano: Springer; 2009.
- Bruckstein AM, Donoho DL, Elad M. From sparse solutions of systems of equations to sparse modeling of signals and images. SIAM Rev. 2009;51(1):34–81.
- Ern A, Perotto S, Veneziani A. Hierarchical model reduction for advection–diffusion–reaction problems. In: Kunisch K, Of G, Steinbach O, editors. Numerical mathematics and advanced applications. Berlin: Springer; 2008. p. 703–10.
- Perotto S, Ern A, Veneziani A. Hierarchical local model reduction for elliptic problems: a domain decomposition approach. Multiscale Model Simul. 2010;8(4):1102–27.
- Perotto S, Veneziani A. Coupled model and grid adaptivity in hierarchical reduction of elliptic problems. J Sci Comput. 2014;60(3):505–36.
- Perotto S, Zilio A. Hierarchical model reduction: three different approaches. In: Cangiani A, Davidchack R, Georgoulis E, Gorban A, Levesley J, Tretyakov M, editors. Numerical mathematics and advanced applications. Berlin: Springer; 2013. p. 851–9.
- Perotto S. A survey of hierarchical model (Hi-Mod) reduction methods for elliptic problems. In: Idelsohn SR, editor. Numerical simulations of coupled problems in engineering. Computational methods in applied sciences. Vol. 33. Cham: Springer; 2014. p. 217–41.
- Eriksson K, Johnson C, Thomée V. Time discretization of parabolic problems by the discontinuous Galerkin method. RAIRO Model Math Anal Numer. 1985;19:611–43.
- Thomée V. Galerkin finite element methods for parabolic problems. 2nd ed. Springer series in computational mathematics. Vol. 25. Berlin: Springer; 2006.
- Eriksson K, Estep D, Hansbo P, Johnson C. Computational differential equations. Cambridge: Cambridge University Press; 1996.
- Becker R, Rannacher R. An optimal control approach to a posteriori error estimation in finite element methods. Acta Numer. 2001;10:1–102.
- Giles MB, Süli E. Adjoint methods for PDEs: a posteriori error analysis and postprocessing by duality. Acta Numer. 2002;11:145–236.
- Oden JT, Prudhomme S. Goal-oriented error estimation and adaptivity for the finite element method. Comput Math Appl. 2001;41:735–56.
- Braack M, Ern A. A posteriori control of modeling errors and discretization errors. Multiscale Model Simul. 2003;1:221–38.
- Micheletti S, Perotto S, David F. Model adaptation enriched with an anisotropic mesh spacing for nonlinear equations: application to environmental and CFD problems. Numer Math Theor Methods Appl. 2013;6(3):447–78.
- Stein E, Rüter M, Ohnimus S. Error-controlled adaptive goal-oriented modeling and finite element approximations in elasticity. Comput Methods Appl Mech Eng. 2007;196:3598–613.
- Verfürth R. A posteriori error estimates for finite element discretizations of the heat equation. Calcolo. 2003;40:195–212.
- Cascón JM, Ferragut L, Asensio MI. Space–time adaptive algorithm for the mixed parabolic problem. Numer Math. 2006;103:367–92.
- Meidner D, Vexler B. Adaptive space–time finite element methods for parabolic optimization problems. SIAM J Control Optim. 2007;46(1):116–42.
- Micheletti S, Perotto S. Space–time adaptation for purely diffusive problems in an anisotropic framework. Int J Numer Anal Model. 2010;7(1):125–55.
- Katz GE, Berkowitz B, Guadagnini A, Saaltink MW. Experimental and modeling investigation of multicomponent reactive transport in porous media. J Contam Hydrol. 2011;120–121:27–44.
- Lions J-L, Magenes E. Non-homogeneous boundary value problems and applications. Berlin: Springer; 1972.
- Dautray R, Lions J-L. Mathematical analysis and numerical methods for science and technology: evolution problems I. Vol. 5. Berlin: Springer; 1992.
- Perotto S. Hierarchical model (Hi-Mod) reduction in non-rectilinear domains. In: Erhel J, Gander M, Halpern L, Pichot G, Sassi T, Widlund O, editors. Lect. notes comput. sci. eng. Vol. 98. Cham: Springer; 2014. p. 477–85.
- Aletti M, Perotto S, Veneziani A. Educated bases for the HiMod reduction of advection–diffusion–reaction problems with general boundary conditions. MOX report 37/2015; 2015.
- Bellhouse BJ, Bellhouse FH, Curl CM, MacMillan TI, Gunning AJ, Spratt EH, MacMurray SB, Nelems JM. A high efficiency membrane oxygenator and pulsatile pumping system and its application to animal trials. Trans Am Soc Artif Int Organs. 1973;19:72–9.
- Hecht F. New developments in FreeFem++. J Numer Math. 2012;20(3–4):251–65.
- Bank RE, Smith RK. A posteriori error estimates based on hierarchical bases. SIAM J Numer Anal. 1993;30:921–35.
- Dörfler W, Nochetto RH. Small data oscillation implies the saturation assumption. Numer Math. 2002;91:1–12.
- Achchab B, Achchab S, Agouzal A. Some remarks about the hierarchical a posteriori error estimate. Numer Methods Partial Differ Equ. 2004;20(6):919–32.
- Griewank A, Walther A. Algorithm 799: revolve: an implementation of checkpointing for the reverse or adjoint mode of computational differentiation. ACM Trans Math Softw. 2000;26(1):19–45.
- Clément P. Approximation by finite element functions using local regularization. RAIRO Anal Numer. 1975;2:77–84.
- Ainsworth M. A posteriori error estimation for fully discrete hierarchic models of elliptic boundary value problems on thin domains. Numer Math. 1998;80:325–62.