The benchmark problem
We consider a transient problem defined on a one-dimensional domain \(\Omega = [0, L]\), with boundary \(\partial \Omega = \{0;L\}\), over the time interval \(\mathcal {I}= [0,T]\). A prescribed zero temperature is applied on \(\partial \Omega\), and the domain is subjected to a time-dependent thermal loading that consists of a thermal source \(f_d(x,t)\) in \(\Omega\). This loading can be seen as a moving flame/laser beam such that \(f_d(x,t) = \delta (x - vt)\), a Dirac distribution, where v is the velocity of the flame/laser beam (see Figure 1).
For the sake of simplicity, the initial conditions are set to zero. The material that composes \(\Omega\) is assumed to be homogeneous and fully known, where c (resp. \(\mu\)) represents the heat capacity (resp. conductivity) of the material.
The associated mathematical problem consists of finding the temperature–flux pair \(\left( u(x,t), \varphi (x,t)\right)\), with \((x,t) \in \Omega \times \mathcal {I}\), that verifies:
In the following, in order to be consistent with other linear problems encountered in Mechanics (linear elasticity for instance), we redefine the variable \(\varphi \rightarrow -\varphi\), which leads, in particular, to the new constitutive relation \(\varphi = \mu \frac{\mathrm \partial ^{}{u}}{\mathrm \partial {x}^{}}\).
Defining \(\mathcal {V}=H^1_0(\Omega ) = \{{w} \in H^1(\Omega ), {w}|_{\partial \Omega } =0\}\), the weak formulation in space of the previous problem reads for all \(t \in \mathcal {I}\):
$$\begin{aligned} \text {Find}\,u(x,t) \in \mathcal {V}\,\mathrm{such\, that }\, b(u,{w}) = l({w}) \quad \forall {w} \in \mathcal {V}\end{aligned}$$
(3)
with \(u_{t=0} = 0\). Bilinear form \(b(\cdot ,\cdot )\) and linear form \(l(\cdot )\) are defined as:
$$\begin{aligned} b(u,{w}) = \int \limits _{\Omega }^{}{\left\{ c\frac{\mathrm \partial ^{}{u}}{\mathrm \partial {t}^{}}{w}+ \mu \frac{\mathrm \partial ^{}{u}}{\mathrm \partial {x}^{}} \frac{\mathrm \partial ^{}{{w}}}{\mathrm \partial {x}^{}} \right\} }\,\text {d}{x}, \quad l({w}) = \int \limits _{\Omega }^{}{\delta (x-vt) {w}}\,\text {d}{x} \end{aligned}$$
As regards the full weak formulation, functional spaces \(\mathcal {T}= L^2(\mathcal {I})\) and \(L^2(\mathcal {I};\mathcal {V}) = \mathcal {V}\otimes \mathcal {T}\) are introduced. The solution \(u \in L^2(\mathcal {I};\mathcal {V})\) is therefore searched, with \(\displaystyle {\frac{\mathrm \partial ^{}{u}}{\mathrm \partial {t}^{}} \in L^2(\mathcal {I};L^2(\Omega ))}\), such that:
$$\begin{aligned} B(u,{w}) = L({w}) \quad \forall {w} \in L^2(\mathcal {I};\mathcal {V}) \end{aligned}$$
(4)
with
$$\begin{aligned} B(u,{w}) = \int \limits _{\mathcal {I}}^{}{b(u,{w})}\,\text {d}{t} + \int \limits _{\Omega }^{}{u(x,0^+)w(x,0^+)}\,\text {d}{x}, \quad L({w}) = \int \limits _{\mathcal {I}}^{}{l({w})}\,\text {d}{t} \end{aligned}$$
The exact solution of (4), which is usually out of reach, is denoted \(u_{ex}\) (and \(\varphi _{ex} = \mu \frac{\mathrm \partial ^{}{u_{ex}}}{\mathrm \partial {x}^{}}\)). It may be approximated using the finite element method (FEM), considering time and space discretizations. Using the classical Galerkin approximation, approximate spaces are introduced: \(\mathcal {V}_h \subset \mathcal {V}\) (resp. \(\mathcal {T}_h \subset \mathcal {T}\)) built from first-order polynomial bases of dimension \(N_x\) (resp. \(N_t\)). The resulting \(N_x \times N_t\) linear system leads to a costly brute-force evaluation when \(N_x\) and/or \(N_t\) are large. Such a solution can be represented as a \(N_x \times N_t\) matrix of the following discrete values:
$$\begin{aligned} \mathbb {U} = \begin{pmatrix} u_h(x_1,t_1) &{} u_h(x_1,t_2) &{} \cdots &{} u_h(x_1,t_{N_t}) \\ u_h(x_2,t_1) &{} u_h(x_2,t_2) &{} \cdots &{} u_h(x_2,t_{N_t}) \\ \vdots &{} &{} \ddots &{} \vdots \\ u_h(x_{N_x},t_1) &{} \cdots &{} \cdots &{} u_h(x_{N_x},t_{N_t}) \end{pmatrix} \end{aligned}$$
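As an illustration, such a snapshot matrix \(\mathbb {U}\) can be produced by a standard space-time solve. The sketch below is a minimal backward-Euler / lumped-P1 finite element solver for the benchmark; the Gaussian regularization of the Dirac source (width `eps`) and all default parameter values are assumptions made here for illustration, not the paper's settings.

```python
import numpy as np

def heat_snapshots(L=1.0, T=1.0, c=1.0, mu=1e-2, v=1.0, Nx=100, Nt=100, eps=2e-2):
    """Backward Euler / P1-FEM solve of c u_t - mu u_xx = f on (0,L) x (0,T],
    with u = 0 on the boundary and f a narrow Gaussian approximating the
    moving Dirac source delta(x - v t). Returns the nodes and the Nx x Nt
    snapshot matrix of interior nodal values (one column per time step)."""
    x = np.linspace(0.0, L, Nx + 2)      # Nx interior nodes + 2 boundary nodes
    h, dt = x[1] - x[0], T / Nt
    xi = x[1:-1]                         # interior nodes
    K = mu / h * (2 * np.eye(Nx) - np.eye(Nx, k=1) - np.eye(Nx, k=-1))
    M = c * h * np.eye(Nx)               # lumped P1 mass matrix
    A = M / dt + K
    U = np.zeros((Nx, Nt))
    u = np.zeros(Nx)                     # zero initial condition
    for n in range(Nt):
        t = (n + 1) * dt
        f = np.exp(-((xi - v * t) / eps) ** 2) / (eps * np.sqrt(np.pi))
        u = np.linalg.solve(A, M @ u / dt + h * f)   # nodal load ~ h * f(x_i)
        U[:, n] = u
    return x, U
```

Each column of `U` is one time slice \(u_h(\cdot ,t_n)\), matching the matrix \(\mathbb {U}\) above.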
Optimal reduced solution
Let us suppose that the full discrete approximated solution \(u_h\) is known (under the form \(\mathbb {U}\)), and one seeks to extract basis functions such that \(u_h\) can be approximated in the separated representation form (1).
The problem consists of defining the separated representation \(u_m\) (1) of the solution \(u_h\) of (3) as the one which minimizes the distance to the approximated solution \(u_h\) with respect to a given norm \(\Vert \cdot \Vert\) on \(\mathcal {V}\otimes \mathcal {T}\).
$$\begin{aligned} \left\Vert u_h(x,t) - u_{m}(x,t) \right\Vert ^2 = \min _{\psi _i, \lambda _i} \left\Vert u_h(x,t) - \sum _{i=1}^m \psi _i(x)\lambda _i(t) \right\Vert ^2 \end{aligned}$$
(5)
where functions \(\psi _i\) and \(\lambda _i\) are the optimal reduced basis functions with respect to the chosen norm.
The SVD is a method that transforms correlated data into uncorrelated data, and identifies and orders the dimensions along which the data points exhibit the highest variation. From a mathematical point of view, this method is just a transformation which diagonalizes a given matrix \(\mathbb {A}\) of size \(M\times N\) and brings it to a canonical form [24]:
$$\begin{aligned} \mathbb {A} = \mathbb {W}\mathbb {\Sigma }\mathbb {V}^{\dagger } \end{aligned}$$
where^{a}
\(\mathbb {W}\) (resp. \(\mathbb {V}\)) is a unitary matrix of size \(M \times M\) (resp. \(N \times N\)) whose first r columns are the left (resp. right) singular vectors, r being the rank of \(\mathbb {A}\), and \(\mathbb {\Sigma }\) is a diagonal matrix formed by the singular values \(\sigma _i\) of \(\mathbb {A}\) such that \(\sigma _1 \ge \sigma _2 \ge \cdots \ge \sigma _r \ge 0\).
Since the singular values are arranged in decreasing order, a best approximation \(\mathbb {A}_k\) of the original data \(\mathbb {A}\) using fewer dimensions can be determined by simply discarding the singular values below a given threshold (\(k \le r\)), massively reducing the data while preserving the main behavior of \(\mathbb {A}\). The resulting matrix \(\mathbb {A}_k\) of rank k verifies the Eckart–Young theorem with respect to the Euclidean norm \(\Vert \cdot \Vert _{2}\) [25]:
$$\begin{aligned} \min _{\text {rank}(X) \le k}\Vert \mathbb {A} - X\Vert _{2} = \Vert \mathbb {A} - \mathbb {A}_k \Vert _{2} = \sigma _{k+1} \end{aligned}$$
(6)
Therefore, if the singular values decrease rapidly, a good approximation of \(\mathbb {A}\) with small rank can be found.
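The Eckart–Young property is easy to check numerically. The sketch below (plain `numpy`, arbitrary sizes) truncates the SVD of a random matrix and verifies that the spectral-norm error of the rank-k truncation equals \(\sigma _{k+1}\).

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((60, 40))                # generic full-rank data matrix

W, s, Vt = np.linalg.svd(A, full_matrices=False) # A = W diag(s) Vt
k = 10
A_k = W[:, :k] * s[:k] @ Vt[:k, :]               # best rank-k approximation

# Eckart-Young: the spectral-norm error of A_k equals sigma_{k+1}
assert np.isclose(np.linalg.norm(A - A_k, ord=2), s[k])
```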
For physical problems, the energy norm \(\Vert \cdot \Vert _E\) is the natural norm in which to measure the error between \(u_{ex}\) and \(u_h\). Therefore, to compare SVD and PGD algorithms, we choose the associated energy scalar product, denoted \(\left\langle \!\left\langle \cdot , \cdot \right\rangle \!\right\rangle _E\), that verifies the variable separation property:
$$\begin{aligned} \left\langle \!\left\langle u, {w} \right\rangle \!\right\rangle _E = \left\langle \sqrt{\left\langle u(t), {w}(t) \right\rangle _{\mathcal {V}}}, \sqrt{\left\langle u(t), {w}(t) \right\rangle _{\mathcal {V}}} \right\rangle _{\mathcal {T}} \end{aligned}$$
(7a)
$$\begin{aligned} \left\langle u(x,t), {w}(x,t) \right\rangle _{\mathcal {T}} = \int \limits _{\mathcal {I}}^{}{u(x,t) {w}(x,t)}\,\text {d}{t} \end{aligned}$$
(7b)
$$\begin{aligned} \left\langle u(x,t), {w}(x,t) \right\rangle _{\mathcal {V}} = \int \limits _{\Omega }^{}{\frac{\mathrm \partial ^{}{u}}{\mathrm \partial {x}^{}}\!(x,t) \frac{1}{\mu } \frac{\mathrm \partial ^{}{{w}}}{\mathrm \partial {x}^{}}\!(x,t) }\,\text {d}{x} \end{aligned}$$
(7c)
The implied discrete inner products \(\left\langle \cdot , \cdot \right\rangle _{\mathcal {V}_h}\) and \(\left\langle \cdot , \cdot \right\rangle _{\mathcal {T}_h}\) can be written:
$$\begin{aligned} \left\langle u, {w} \right\rangle _{\mathcal {V}_h} = \varvec{u}^{\dagger } \mathbb {K} \varvec{{w}}, \quad \left\langle u, {w} \right\rangle _{\mathcal {T}_h} = \varvec{u} \mathbb {M} \varvec{{w}}^{\dagger } \end{aligned}$$
where \(\mathbb {K} \in \mathbb {R}^{N_x \times N_x}\) (resp. \(\mathbb {M} \in \mathbb {R}^{N_t \times N_t}\)) is the finite element stiffness matrix (resp. the finite element mass matrix in time) and \(\varvec{u},\varvec{w} \in \mathbb {R}^{N_x \times N_t}\) are the FEM solutions. Employing a Cholesky factorization, the inner product \(\langle \!\langle \cdot , \cdot \rangle \!\rangle _E\) can be transformed to the standard Euclidean inner product as:
$$\begin{aligned} \Vert u \Vert _{\mathcal {V}_h} = \Vert \sqrt{\mathbb {K}} \varvec{u} \Vert _{2}, \quad \Vert u \Vert _{\mathcal {T}_h} = \Vert \varvec{u} \sqrt{\mathbb {M}} \Vert _{2} \end{aligned}$$
(8)
To avoid the use of the Euclidean norm in (6), and thanks to the transformation (8), one can compute the reduced SVD according to the energy norm by applying the regular SVD algorithm to the matrix \(\sqrt{\mathbb {K}}\mathbb {U}\sqrt{\mathbb {M}}\) rather than to the matrix \(\mathbb {U}\). The matrices that contain the reduced basis functions \(\psi _i\) and \(\lambda _i\) introduced in (1) are then recovered from the left and right singular vectors of this weighted matrix through the inverse factors \(\sqrt{\mathbb {K}}^{-1}\) and \(\sqrt{\mathbb {M}}^{-1}\).
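In matrix form, this amounts to a weighted SVD. A minimal sketch, assuming \(\mathbb {K}\) and \(\mathbb {M}\) are symmetric positive definite so that `numpy` Cholesky factors can play the role of \(\sqrt{\mathbb {K}}\) and \(\sqrt{\mathbb {M}}\):

```python
import numpy as np

def energy_svd(U, K, M, m):
    """Rank-m SVD of the snapshot matrix U (Nx x Nt) in the energy metric:
    the plain SVD is applied to Ck^T @ U @ Cm, where K = Ck Ck^T and
    M = Cm Cm^T are Cholesky factorizations, and the singular vectors are
    mapped back through the inverse factors. Columns of Psi (resp. Lam)
    are the space (resp. time) modes, K- (resp. M-) orthonormal."""
    Ck = np.linalg.cholesky(K)
    Cm = np.linalg.cholesky(M)
    W, s, Vt = np.linalg.svd(Ck.T @ U @ Cm, full_matrices=False)
    Psi = np.linalg.solve(Ck.T, W[:, :m])     # psi_i = Ck^{-T} w_i
    Lam = np.linalg.solve(Cm.T, Vt[:m, :].T)  # lambda_i = Cm^{-T} v_i
    return Psi, s[:m], Lam
```

At full rank, \(\mathbb {U} = \Psi \,\mathrm{diag}(\sigma _i)\,\Lambda ^T\) is recovered exactly, and the columns of \(\Psi\) and \(\Lambda\) satisfy the orthogonality properties stated in Remark 1.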
Remark 1
The reduced basis functions \(\psi _i \in \mathcal {V}_h\) (resp. \(\lambda _i \in \mathcal {T}_h\)) are orthogonal with respect to the inner product \(\left\langle \cdot , \cdot \right\rangle _{\mathcal {V}_h}\) (resp. \(\left\langle \cdot , \cdot \right\rangle _{\mathcal {T}_h}\)).
Classical PGD methods
Usually, the brute-force solution \(u_h\) is out of reach. In this section, classical PGD strategies are reviewed, aiming to approximate the solution u in the form \(u_m\) that verifies (1) without prior knowledge of the solution. In these methods, the reduced basis is computed on the fly as the problem is solved.
Rather than computing the m basis functions at once, which is not practical for large-scale applications because of the prohibitive computational cost, PGD algorithms are built on a progressive algorithm [20]. Assuming that a decomposition \(u_{m}\) is known (previously computed), a new couple \((\psi ,\lambda ) \in \mathcal {V}\times \mathcal {T}\) is sought such that u is of the form (PGD decomposition of order \(m+1\)):
$$\begin{aligned} u(x,t) \simeq u_{m+1}(x,t) = u_{m}(x,t) + \psi (x)\lambda (t) \end{aligned}$$
(9)
In the following, for the sake of clarity, the dependency of the functions on their variables is no longer indicated.
Galerkin PGD
First, a PGD computational method based on Galerkin orthogonality [11] and well-suited for diffusion-dominated problems is presented. Injecting the separated variable form (9) into (4), the new couple \((\psi ,\lambda ) \in \mathcal {V}\times \mathcal {T}\) is the optimal one that verifies the Galerkin orthogonality:
$$\begin{aligned} B\!\left( u_m + \psi \lambda , \psi ^*\lambda + \psi \lambda ^*\right) = L\!\left( \psi ^*\lambda + \psi \lambda ^*\right) \quad \forall \psi ^*\in \mathcal {V},\lambda ^* \in \mathcal {T}\end{aligned}$$
(10)
This equation naturally leads to the following problems:

the weak formulation of a partial differential equation (PDE) in space, usually approximated by FEM:
$$\begin{aligned} B\!\left( u_m + \psi \lambda , \psi ^*\lambda \right) = L\!\left( \psi ^*\lambda \right) \quad \forall \psi ^* \in \mathcal {V}\end{aligned}$$
(11)

the weak formulation of ordinary differential equation (ODE) in time:
$$\begin{aligned} B\!\left( u_m + \psi \lambda , \psi \lambda ^* \right) = L\!\left( \psi \lambda ^*\right) \quad \forall \lambda ^* \in \mathcal {T}\end{aligned}.$$
(12)
A couple \((\psi ,\lambda ) \in \mathcal {V}\times \mathcal {T}\) then verifies (10) if and only if \(\psi\) verifies (11) and \(\lambda\) verifies (12), which is a nonlinear problem. As this problem can be interpreted as a pseudoeigenproblem [22], a natural algorithm to capture the dominant eigenfunction consists of a power iterations strategy. Starting from an initial random time function \(\lambda\) (verifying initial conditions), each iteration of the power algorithm verifies a sequence of lower dimensional problems (11–12), leading to Algorithm 1.
Remark 2
The key property of the Galerkin computational method is that the error is orthogonal to the approximation spaces. Therefore, \(u_{ex} - u_m\) is orthogonal to \(\mathcal {V}_h \otimes \mathcal {T}_h\) with respect to the \(B(\cdot ,\cdot )\) scalar product.
Remark 3
Normalization of the space function \(\psi\) or the time function \(\lambda\) is preferable for stability reasons in the power iterations algorithm. In Algorithm 1, we arbitrarily choose to normalize the space function \(\psi\).
Remark 4
One could check the convergence of \(\psi \lambda\) to stop the power iterations. As explained in [22], a coarse criterion is sufficient to obtain a good approximation, as convergence is reached quickly. In practice, we rather prefer to fix the number of subiterations to \(k_{max} = 4\), letting the next PGD mode correct the previous one if necessary.
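The fixed-point structure of the power iterations can be sketched on the discrete problem. The code below is a minimal sketch of one enrichment for the first mode (\(u_m = 0\)); it assumes lumped FE matrices `Mx`, `Kx`, a load matrix `F` whose columns are the nodal loads at each time step, and backward Euler for the time problem, none of which is prescribed by the text.

```python
import numpy as np

def galerkin_pgd_mode(Mx, Kx, F, dt, c=1.0, kmax=4):
    """One Galerkin-PGD enrichment (psi, lam) computed by the power-iteration
    strategy: alternately solve the space problem (11) with lam frozen and
    the time problem (12) with psi frozen. Mx, Kx are FE mass/stiffness
    matrices and F[:, n] is the nodal load vector at time step n."""
    Nx, Nt = F.shape
    lam = np.ones(Nt)
    lam[0] = 0.0                         # zero initial condition
    psi = np.zeros(Nx)
    for _ in range(kmax):
        # space problem: [c*a1*Mx + a0*Kx] psi = F @ lam * dt,
        # with a1 = int lam lam' dt and a0 = int lam^2 dt
        dlam = np.gradient(lam, dt)
        a1, a0 = np.sum(lam * dlam) * dt, np.sum(lam * lam) * dt
        psi = np.linalg.solve(c * a1 * Mx + a0 * Kx, F @ lam * dt)
        psi /= np.sqrt(psi @ Kx @ psi)   # normalize the space function (Remark 3)
        # time problem: scalar ODE c*m*lam' + k*lam = g, backward Euler
        m, k = psi @ Mx @ psi, psi @ Kx @ psi
        g = psi @ F
        lam = np.zeros(Nt)
        for n in range(1, Nt):
            lam[n] = (c * m * lam[n - 1] / dt + g[n]) / (c * m / dt + k)
    return psi, lam
```

Each pass of the loop is one subiteration of Algorithm 1; `kmax = 4` mirrors the choice discussed in Remark 4.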
Improvement of the decomposition
A post-processing of the computed functions can be made to improve the decomposition, such as an orthogonalization of the space basis functions. A simple modification of the power iterations algorithm consists in updating the whole set of time functions \(\Lambda _m = \left\{ \lambda _i\right\} _{i=1 \ldots m}\) after every nth new PGD mode. These functions are defined by a system of ODEs of dimension m analogous to (12). A mapping T is defined, taking the whole set of space functions \(\Psi _m = \left\{ \psi _i\right\} _{i=1 \ldots m} \in \mathcal {V}^{m}\) into time functions \(\Lambda _m = T(\Psi _m) \in \mathcal {T}^{m}\) defined by:
$$\begin{aligned} B(\Psi _m \cdot \Lambda _m,\Psi _m \cdot \Lambda _m^*) = L(\Psi _m \cdot \Lambda _m^*) \quad \forall \Lambda _m^* \in \mathcal {T}^m \end{aligned}$$
(13)
This post-processing improves the quality of the progressive PGD algorithm, although the obtained decomposition is not the optimal one. If the update step is performed after each new PGD mode, it is added after the power iterations step, as shown in Algorithm 2. Known as the update step [26], it can be applied to all PGD computational methods.
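Discretely, the update step amounts to integrating an m-dimensional reduced ODE system with the space basis frozen. A minimal sketch (assuming, as before, FE matrices `Mx`, `Kx`, nodal load columns `F`, a frozen space basis `Psi`, and backward Euler in time):

```python
import numpy as np

def update_time_functions(Psi, Mx, Kx, F, dt, c=1.0):
    """Update step: with the space basis Psi (Nx x m) frozen, recompute all
    time functions at once from the coupled m-dimensional ODE system (13),
    integrated here by backward Euler with a zero initial condition."""
    Mr = c * (Psi.T @ Mx @ Psi)          # reduced mass matrix (m x m)
    Kr = Psi.T @ Kx @ Psi                # reduced stiffness matrix
    G = Psi.T @ F                        # reduced load, one column per time step
    m, Nt = G.shape[0], F.shape[1]
    Lam = np.zeros((m, Nt))
    A = Mr / dt + Kr
    for n in range(1, Nt):
        Lam[:, n] = np.linalg.solve(A, Mr @ Lam[:, n - 1] / dt + G[:, n])
    return Lam
```

The rows of `Lam` are the updated time functions \(\lambda _i\); only an m-by-m system is solved per time step, which is what keeps the update cheap.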
Minimal residual PGD
The next technique is a PGD strategy based on a minimal residual criterion [21]. This construction presents monotonic convergence of the decomposition in the sense of residual norm.
The residual \(\mathcal {R}(u) \in \mathcal {V}\otimes \mathcal {T}\) is defined under the form:
$$\begin{aligned} \left\langle \!\left\langle {w}, \mathcal {R}(u) \right\rangle \!\right\rangle = L({w}) - B(u,{w}) = \langle \!\langle {w}, \delta (x-vt) - \underbrace{\left( c\frac{\mathrm \partial ^{}{u}}{\mathrm \partial {t}^{}} - \mu \frac{\mathrm \partial ^{2}{u}}{\mathrm \partial {x}^{2}}\right) }_{\mathcal {B}(u)} \rangle \!\rangle \end{aligned}$$
Therefore, an optimal couple \((\psi ,\lambda ) \in \mathcal {V}\times \mathcal {T}\) is defined as the one that minimizes the residual norm:
$$\begin{aligned} (\psi ,\lambda ) = \mathop {{{\mathrm{\arg \min }}}}\limits _{\psi ,\lambda } \Vert \mathcal {R}(u_{m}+\psi \lambda )\Vert ^2 \end{aligned}$$
or equivalently:
$$\begin{aligned} (\psi ,\lambda ) = \mathop {{{\mathrm{\arg \min }}}}\limits _{\psi ,\lambda } \frac{1}{2} \left\langle \!\left\langle \mathcal {B}(\psi \lambda ), \ \mathcal {B}(\psi \lambda ) \right\rangle \!\right\rangle - \left\langle \!\left\langle \mathcal {R}(u_m), \ \mathcal {B}(\psi \lambda ) \right\rangle \!\right\rangle \end{aligned}$$
(14)
In a continuous framework, (14) leads to a non-classical formulation, as it requires the introduction of a more refined functional space in space than \(\mathcal {V}\) in order to guarantee existence and uniqueness of solutions (introducing \(H^2(\Omega ) \otimes H^1(\mathcal {I})\) in place of \(H^1(\Omega ) \otimes L^2(\mathcal {I})\) for our benchmark problem). In practice, the minimal residual formulation is applied to the discretized problem (4) after introducing the approximation spaces \(\mathcal {V}_h\) and \(\mathcal {T}_h\).
Therefore, by introducing the discrete \(N_xN_t \times N_xN_t\) matrix \(\mathbb {B}\) (resp. the \(N_xN_t\) vector \(\varvec{L}\)) of the bilinear operator \(B(\cdot ,\cdot )\) (resp. the linear operator \(L(\cdot )\)), the discrete \(N_xN_t\) residual vector \(\varvec{R}\) of \(\mathcal {R}(\cdot )\) takes the form:
$$\begin{aligned} \varvec{R}(u) = \mathbb {B}\varvec{u} - \varvec{L}, \quad \varvec{u} = \begin{bmatrix} \varvec{u}(t_1) \\ \varvec{u}(t_2) \\ \vdots \end{bmatrix} \end{aligned}$$
such that an optimal couple \((\psi ,\lambda ) \in \mathcal {V}_h \times \mathcal {T}_h\) minimizes the discretized residual norm (Euclidean norm, since the matrix \(\mathbb {B}\) is not invertible):
$$\begin{aligned} (\psi ,\lambda ) = \mathop {{{\mathrm{\arg \min }}}}\limits _{\psi ,\lambda } \Vert \varvec{R}(u_m + \psi \lambda )\Vert _{2}^2 \end{aligned}$$
By introducing notation \(\varvec{u}_{m+1} = \varvec{u}_{m} + \varvec{\psi }\cdot \varvec{\lambda }\), the minimization leads to the problem:
$$\begin{aligned} \left( \varvec{\psi }^*\cdot \varvec{\lambda }+\varvec{\psi }\cdot \varvec{\lambda }^*\right) ^T\left[ \mathbb {B}^T\mathbb {B}\left( \varvec{u}_m + \varvec{\psi }\cdot \varvec{\lambda }\right) - \mathbb {B}^T\varvec{L}\right] = 0 \end{aligned}$$
(15)
Let us define the \(N_x \times N_x\) matrices \(\mathbb {C}_{ij}\) and the \(N_x\) vectors \(\varvec{F}_i\) such that:
$$\begin{aligned} \mathbb {B}^T\mathbb {B} = \begin{bmatrix} \mathbb {C}_{11}&\mathbb {C}_{12}&\cdots \\ \mathbb {C}_{21}&\ddots&\\ \vdots&&\ddots \\ \end{bmatrix}, \quad \mathbb {B}^T\varvec{L} = \begin{bmatrix} \varvec{F}_1 \\ \varvec{F}_2 \\ \vdots \end{bmatrix} \end{aligned}$$
The discrete minimization (15) leads to the following reduced problems:
Remark 5
As for all Galerkin-based methods, the error \(u_{ex}-u_m\) is orthogonal to the approximation space \(\mathbb {B}^T\mathbb {B}\left( \mathcal {V}_h\otimes \mathcal {T}_h\right)\) with respect to the Euclidean scalar product, and the quality of the decomposition depends strongly on the conditioning of the operator \(\mathbb {B}^T\mathbb {B}\).
Petrov–Galerkin PGD
In this section, another possible PGD computational method is presented, based on a Petrov–Galerkin criterion, also called MinMax [22]. Such a formulation is frequently used for solving PDEs that contain odd-order terms, implying a loss of symmetry in the weak formulation, such as transport-dominated problems.
This Petrov–Galerkin formulation uses the weak formulation (4) where the unknown u and test function w are in different finite dimensional subspaces. Under the variable separation assumption, the test function w is taken as another couple of space and time functions \((\tilde{\psi },\tilde{\lambda }) \in \mathcal {V}\times \mathcal {T}\) such that the new PGD couple \((\psi ,\lambda ) \in \mathcal {V}\times \mathcal {T}\) is the optimal one that verifies the Galerkin orthogonality:
$$\begin{aligned} B\!\left( u_m + \psi \lambda , \tilde{\psi }^*\tilde{\lambda } + \tilde{\psi }\tilde{\lambda }^*\right) = L\!\left( \tilde{\psi }^*\tilde{\lambda } + \tilde{\psi }\tilde{\lambda }^*\right) \quad \forall \tilde{\psi }^*\in \mathcal {V},\tilde{\lambda }^* \in \mathcal {T}\end{aligned}$$
(18)
This leads to the following two orthogonality criteria:
$$\begin{aligned} B\!\left( u_m + \psi \lambda , \tilde{\psi }^*\tilde{\lambda }\right) = L\!\left( \tilde{\psi }^*\tilde{\lambda }\right) \quad \forall \tilde{\psi }^*\in \mathcal {V} \end{aligned}$$
(19a)
$$\begin{aligned} B\!\left( u_m + \psi \lambda , \tilde{\psi }\tilde{\lambda }^*\right) = L\!\left( \tilde{\psi }\tilde{\lambda }^*\right) \quad \forall \tilde{\lambda }^*\in \mathcal {T} \end{aligned}$$
(19b)
As a matter of fact, additional equations must be added in order to define the new functions \((\tilde{\psi },\tilde{\lambda }) \in \mathcal {V}\times \mathcal {T}\). The following orthogonality criteria (20a), (20b) are used, where \(\left\langle \!\left\langle \cdot , \cdot \right\rangle \!\right\rangle\) is an inner product on \(\mathcal {V}\otimes \mathcal {T}\). To compare the PGD computational methods with one another, we select the energy inner product (7a)–(7c).
$$\begin{aligned} B\!\left( \psi ^*\lambda , \tilde{\psi }\tilde{\lambda }\right) = \left\langle \!\left\langle \psi ^*\lambda , \psi \lambda \right\rangle \!\right\rangle _E \quad \forall \psi ^*\in \mathcal {V} \end{aligned}$$
(20a)
$$\begin{aligned} B\!\left( \psi \lambda ^*, \tilde{\psi }\tilde{\lambda }\right) = \left\langle \!\left\langle \psi \lambda ^*, \psi \lambda \right\rangle \!\right\rangle _E \quad \forall \lambda ^*\in \mathcal {T} \end{aligned}$$
(20b)
As for the previous algorithms, an approximation of \((\psi ,\tilde{\psi },\lambda ,\tilde{\lambda }) \in \mathcal {V}^2\times \mathcal {T}^2\) is computed to verify (18) and (20a), (20b) simultaneously. As such a problem is nonlinear, a power iterations strategy is chosen, where the four lower-dimensional problems are solved iteratively one after the other (see Algorithm 4).
Remark 6
As for all Galerkin-based methods, the error \(u_{ex}-u_m\) is orthogonal to an approximation space \(\mathcal {L} = \left\{ \tilde{\psi }_i\otimes \tilde{\lambda }_i\right\} _{i = 1 \ldots m}\) of the test space with respect to the \(B(\cdot ,\cdot )\) scalar product.
Minimisation of the CRE
To control the PGD computational process, specific error indicators have been built in recent years. They particularly assess the error due to the truncation of the sum in the separated decomposition (1). A first robust approach for PGD verification, using the concept of CRE, was proposed in [27, 28], leading to guaranteed and relevant error evaluation. Its specificity lies in the way the required dual fields are constructed, as \({\varphi (u_m)} = \mu \frac{\mathrm \partial ^{}{u_m}}{\mathrm \partial {x}^{}}\) does not verify equilibrium (3). The solution leads to a variable separation of \({\varphi (u_m)}\) with a static formulation, performed from and after the classical Galerkin PGD space problem. The obtained data, combined with prescribed ones such as displacements, tractions and body forces, define the starting point to evaluate the reduction error.
In this error estimator, the required dual field is computed after the PGD procedure and therefore cannot influence the solution. Here we propose to compute and control the PGD procedure on the fly with the CRE estimator.
The equilibrium is reformulated by introducing flux fields \(\tau\) and Q such that:
$$\begin{aligned} \int \limits _{\mathcal {I}}^{}{\!\!\int \limits _{\Omega }^{}{c\frac{\mathrm \partial ^{}{u}}{\mathrm \partial {t}^{}}{w} +\underbrace{\Big (\varphi - Q\Big )}_{\tau } \frac{\mathrm \partial ^{}{{w}}}{\mathrm \partial {x}^{}}}\,\text {d}{x}}\,\text {d}{t} = 0 \quad \forall {w} \in \mathcal {V}\otimes \mathcal {T}\end{aligned}$$
(21)
We wish to minimize the CRE estimator (22), under the constraint that the flux \(\tau\) verifies equilibrium (21), with u a kinematically admissible field written in separated representation form.
$$\begin{aligned} \left( u(x,t),\tau (x,t)\right) = \mathop {{{\mathrm{\arg \min }}}}\limits _{u,\tau } \int \limits _{\mathcal {I}}^{}{\!\!\int \limits _{\Omega }^{}{\frac{1}{\mu }\left[ \tau (x,t) + Q(x,t) - \mu \frac{\mathrm \partial ^{}{u}}{\mathrm \partial {x}^{}}\!(x,t) \right] ^2}\,\text {d}{x}}\,\text {d}{t} \end{aligned}$$
(22)
As the flux \(\tau\) has to verify the equilibrium (21), \(\tau\) can also be written under the variable separation form:
$$\begin{aligned} \tau (x,t) \simeq \tau _m(x,t) = \sum _{i=1}^m \frac{\mathrm d^{}{\lambda _i}}{\mathrm d{t}^{}}\!(t) Z_i(x) \end{aligned}$$
(23)
Indeed, by injecting assumption (1) into equilibrium (21) with a progressive algorithm solver, \(\tau\) has to verify, at each PGD stage m, the following equation:
$$\begin{aligned} \forall k, {\forall t,} \quad \int \limits _{\Omega }^{}{\sum _{i=1}^k \left[ c\frac{\mathrm d^{}{\lambda _i}}{\mathrm d{t}^{}}\psi _i\right] w + \tau \frac{\mathrm \partial ^{}{w}}{\mathrm \partial {x}^{}}}\,\text {d}{x} = 0, \quad \forall w \in \mathcal {V}\end{aligned}$$
(24)
The chosen separated form (23) leads to an interesting condition between \(Z_i\) and \(\psi _i\):
$$\begin{aligned} \forall i, {\forall t,} \quad \int \limits _{\Omega }^{}{Z_i(x) \frac{\mathrm \partial ^{}{w}}{\mathrm \partial {x}^{}} + c \psi _i(x)w}\,\text {d}{x} = 0, \quad \forall w \in \mathcal {V}\end{aligned}$$
(25)
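Integrating (25) by parts (w vanishes on \(\partial \Omega\)) gives \(\frac{\mathrm d^{}{Z_i}}{\mathrm d{x}^{}} = c\,\psi _i\), so each flux mode is, up to a constant, an antiderivative of the corresponding space mode. A small numerical sketch of this construction; the trapezoidal quadrature and the convention \(Z(0)=0\) fixing the constant are assumptions made here for illustration:

```python
import numpy as np

def flux_mode_from_space_mode(x, psi, c=1.0):
    """Build the flux mode Z from the space mode psi through the
    integrated-by-parts form of condition (25): dZ/dx = c * psi.
    Trapezoidal quadrature on the nodes x; Z(0) = 0 by convention."""
    dZ = c * np.asarray(psi, dtype=float)
    Z = np.concatenate(([0.0], np.cumsum((dZ[1:] + dZ[:-1]) / 2.0 * np.diff(x))))
    return Z
```

For instance, for \(\psi (x)=\sin (\pi x)\) on \([0,1]\) with \(c=1\), this returns \(Z(x)\approx (1-\cos (\pi x))/\pi\).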
Therefore, the progressive algorithm adopted in “Classical PGD methods” also applies to \(\tau\). It assumes that the decompositions \(u_{m}\) and \(\tau _m\) are known (previously computed) and looks for a new triplet \((\psi ,\lambda ,Z)\) such that u verifies (9) and \(\tau\) verifies:
$$\begin{aligned} \tau (x,t) \simeq \tau _{m+1}(x,t) = \tau _{m}(x,t) + \frac{\mathrm d^{}{\lambda }}{\mathrm d{t}^{}}\!(t)Z(x) \end{aligned}$$
(26)
A full minimization
Minimizing the CRE (22) under conditions (25) leads to minimizing the following equation for \(\psi \in \mathcal {V}\), \(\lambda \in \mathcal {T}\) and Z where the constraint is taken into account through Lagrange multipliers:
$$\begin{aligned} \left( \psi ,\lambda ,Z,{w}\right) &= {\arg \min _{\psi ,\lambda ,Z} \max _{w}} \!\int \limits _{\mathcal {I}}^{}{\!\!\int \limits _{\Omega }^{}{\frac{1}{\mu }\left( Q + \tau _m - \mu \frac{\mathrm \partial ^{}{u_m}}{\mathrm \partial {x}^{}} + \frac{\mathrm d^{}{\lambda }}{\mathrm d{t}^{}}Z - \mu \frac{\mathrm d^{}{\psi }}{\mathrm d{x}^{}}\lambda \right) ^2\!\!}\,\text {d}{x}}\,\text {d}{t} + \int \limits _{\Omega }^{}{Z\frac{\mathrm d^{}{{w}}}{\mathrm d{x}^{}}+c\psi {w}}\,\text {d}{x} \end{aligned}$$
(27)
Solving the saddle point problem (27) according to each unknown leads to:

According to \(\lambda\):
$$\begin{aligned} 0 &= \int \limits _{\mathcal {I}}^{}{\!\!\int \limits _{\Omega }^{}{\frac{1}{\mu }\left( Q + \tau _m - \mu \frac{\mathrm \partial ^{}{u_m}}{\mathrm \partial {x}^{}} + \frac{\mathrm d^{}{\lambda }}{\mathrm d{t}^{}} Z - \mu \lambda \frac{\mathrm d^{}{\psi }}{\mathrm d{x}^{}}\right) \!\left( \frac{\mathrm d^{}{\lambda ^*}}{\mathrm d{t}^{}}Z - \mu \lambda ^*\frac{\mathrm d^{}{\psi }}{\mathrm d{x}^{}} \right) }\,\text {d}{x}}\,\text {d}{t} \qquad \forall \lambda ^* \in \mathcal {T}\end{aligned}$$
(28a)

According to \(\psi\):
$$\begin{aligned} 0 &= 2\int \limits _{\mathcal {I}}^{}{\!\!\int \limits _{\Omega }^{}{\left( \mu \frac{\mathrm \partial ^{}{u_m}}{\mathrm \partial {x}^{}} - Q - \tau _m - \frac{\mathrm d^{}{\lambda }}{\mathrm d{t}^{}} Z + \mu \lambda \frac{\mathrm d^{}{\psi }}{\mathrm d{x}^{}}\right) \lambda \frac{\mathrm d^{}{\psi ^*}}{\mathrm d{x}^{}}}\,\text {d}{x}}\,\text {d}{t} + \int \limits _{\Omega }^{}{c \psi ^* {w} }\,\text {d}{x} \quad \forall \psi ^* \in \mathcal {V}\end{aligned}$$
(28b)

According to Z:
$$\begin{aligned} 0 &= 2\int \limits _{\mathcal {I}}^{}{\!\!\int \limits _{\Omega }^{}{\frac{1}{\mu }\left( Q + \tau _m - \mu \frac{\mathrm \partial ^{}{u_m}}{\mathrm \partial {x}^{}} + \frac{\mathrm d^{}{\lambda }}{\mathrm d{t}^{}} Z - \mu \lambda \frac{\mathrm d^{}{\psi }}{\mathrm d{x}^{}}\right) \frac{\mathrm d^{}{\lambda }}{\mathrm d{t}^{}}Z^*}\,\text {d}{x}}\,\text {d}{t} + \int \limits _{\Omega }^{}{Z^* \frac{\mathrm \partial ^{}{{w}}}{\mathrm \partial {x}^{}}}\,\text {d}{x} \quad \forall Z^* \in \mathcal {S} \end{aligned}$$
(28c)
where \(\mathcal {S} = \left\{ w \in L^2(\Omega ) \mid \forall {g} \in L^2(\Omega ), \int \limits _{\Omega }^{}{{g} \frac{\mathrm d^{}{{w}}}{\mathrm d{x}^{}}}\,\text {d}{x} = 0\right\}\).

According to w:
$$\begin{aligned} 0 = \int \limits _{\Omega }^{}{Z \frac{\mathrm \partial ^{}{{w}^*}}{\mathrm \partial {x}^{}} + c \psi {w}^*}\,\text {d}{x} \quad \forall {w}^* \in \mathcal {V}\end{aligned}.$$
(28d)
As (28b)–(28d) are coupled, the discrete finite element solutions \((\psi _h,Z_h,{w}_h)\) are determined at once by inverting a matrix problem of three times the size of the space problem:
$$\begin{aligned} {\mathbb {B} \begin{pmatrix} \varvec{\psi }_h \\ \varvec{Z}_h \\ \varvec{{w}}_h \end{pmatrix} = \varvec{L}} \end{aligned}$$
It should be noted that, contrary to the usual use of the CRE, discretization errors are not taken into account here. As a matter of fact, very refined space and time meshes will be used so that these errors can be neglected.
A partial minimization
The full minimization requires inverting a large problem, which is computationally expensive in the general case. We propose here a partial minimization where condition (21) is not taken into account in the minimization process but afterwards. This leads to a new and promising PGD computational strategy.
Minimizing (22) under constraint (25) leads to the minimization in \(\mathcal {V}\otimes \mathcal {T}\otimes \mathcal {S}\) of:
$$\begin{aligned} \left( \psi ,\lambda ,Z\right) = \mathop {{{\mathrm{\arg \min }}}}\limits _{\psi ,\lambda ,Z} \int \limits _{\mathcal {I}}^{}{\int \limits _{\Omega }^{}{\frac{1}{\mu }\left( Q + \tau _m - \mu \frac{\mathrm \partial ^{}{u_m}}{\mathrm \partial {x}^{}} + \frac{\mathrm d^{}{\lambda }}{\mathrm d{t}^{}}Z - \mu \frac{\mathrm d^{}{\psi }}{\mathrm d{x}^{}}\lambda \right) ^2}\,\text {d}{x}}\,\text {d}{t} \end{aligned}$$
(29)
Minimization (29) leads to:

According to \(\lambda\):
$$\begin{aligned} 0 &= \int \limits _{\mathcal {I}}^{}{\int \limits _{\Omega }^{}{\frac{1}{\mu }\left( Q + \tau _m - \mu \frac{\mathrm \partial ^{}{u_m}}{\mathrm \partial {x}^{}} + \frac{\mathrm d^{}{\lambda }}{\mathrm d{t}^{}}Z - \mu \frac{\mathrm d^{}{\psi }}{\mathrm d{x}^{}}\lambda \right) \left( Z\frac{\mathrm d^{}{\lambda ^*}}{\mathrm d{t}^{}} - \mu \frac{\mathrm d^{}{\psi }}{\mathrm d{x}^{}}\lambda ^*\right) }\,\text {d}{x}}\,\text {d}{t} \qquad \forall \lambda ^* \in \mathcal {T}\end{aligned}$$
(30)

According to \(\psi\). The field Z is fixed to the value computed at the previous iteration of the power iterations algorithm:
$$\begin{aligned} 0 = \int \limits _{\mathcal {I}}^{}{\int \limits _{\Omega }^{}{\left( \mu \frac{\mathrm \partial ^{}{u_m}}{\mathrm \partial {x}^{}} + \mu \frac{\mathrm d^{}{\psi }}{\mathrm d{x}^{}}\lambda - Q - \tau _m - \frac{\mathrm d^{}{\lambda }}{\mathrm d{t}^{}}Z\right) \lambda \frac{\mathrm d^{}{\psi ^*}}{\mathrm d{x}^{}}}\,\text {d}{x}}\,\text {d}{t} \qquad \forall \psi ^* \in \mathcal {V}\end{aligned}$$
(31)

One can then determine Z to verify (25).
Remark 7
The CRE quantity \(\varphi _{m}-\mu \frac{\mathrm \partial ^{}{u_m}}{\mathrm \partial {x}^{}} = \tau _{m}+Q-\mu \frac{\mathrm \partial ^{}{u_m}}{\mathrm \partial {x}^{}}\) is orthogonal to the \(\mathcal {V}\otimes \mathcal {T}\) and \(\mathcal {S}\otimes \mathcal {T}\) approximation spaces with respect to the energy scalar product.
Remark 8
For stationary problems, condition (24) does not apply. The idea is to seek \(\tau \in \mathcal {S}_0\) of the form:
$$\begin{aligned} \tau \simeq \tau _m = \sum _{i=1}^m \tilde{Z}_i(x,t) \end{aligned}$$
where each \(\tilde{Z}_i \in \mathcal {S}_0 = \left\{ w \in L^2(\Omega ,\mathcal {I}) \mid \forall z \in L^2(\Omega ,\mathcal {I}), \int \limits _{\mathcal {I}}^{}{\int \limits _{\Omega }^{}{w \frac{\mathrm \partial ^{}{z}}{\mathrm \partial {x}^{}}}\,\text {d}{x}}\,\text {d}{t} = 0\right\}\). Since potential and complementary energies are decoupled, each \(\tilde{Z}_i\) is decoupled from \(\psi _i\).
A nonseparable lifting algorithm
In this section, a solution of the benchmark problem is computed as the sum of two contributions:

an enrichment function, determined analytically in an infinite domain, which does not verify the separated decomposition;

an additional function, that aims at verifying the boundary conditions, and which verifies variables separation.
For the specific benchmark problem defined in (2a)–(2d), we look for the solution in an infinite domain, without space boundary conditions. Let us set \(X=x-vt\) such that \(U(X) = U(x-vt) = u(x,t)\). The problem then reads:
$$\begin{aligned} \left\{ \begin{array}{l} cv \frac{\mathrm d^{}{U}}{\mathrm d{X}^{}} + \mu \frac{\mathrm d^{2}{U}}{\mathrm d{X}^{2}} = - \delta (X) \\ U(\pm \infty ) = 0 \end{array}\right. \end{aligned}$$
The solution of such an ODE is straightforward, leading to the following equation, where \(\theta\) stands for the Heaviside function and K is a real constant.
$$\begin{aligned} u_{\infty }(x,t,K) = \frac{\theta (x)+K}{cv}\left( 1 - \exp ^{-\frac{cv}{ \mu }x}\right) - \frac{\theta (x-vt)+K}{cv}\left( 1 - \exp ^{-\frac{cv}{ \mu }\left( x-vt\right) }\right) \end{aligned}$$
Therefore, a dedicated algorithm is proposed, called Lifting PGD, where the solution is computed with one of the previous PGD algorithms initialized by \(u_{\infty }\) with \(K=0\), i.e. removing the homogeneous solution. To respect the boundary conditions in space, \(u_{\infty }\) is multiplied by ad hoc functions. As the solution u is equal to \(u_{\infty }\) far from the boundaries, u is searched under a specific form that corrects \(u_{\infty }\) near the boundary conditions.
$$\begin{aligned} u(x,t) \approx u_{\infty }(x,t,0) \times \delta (x) \times \delta (L-x) + u_m(x,t) \end{aligned}$$
To compute the space and time basis functions \(\psi _i\) and \(\lambda _i\), one can select one of the previously presented PGD strategies.