
Recursive POD expansion for reaction-diffusion equation

Abstract

This paper focuses on the low-dimensional representation of multivariate functions. We study a recursive POD representation, based upon the use of the power iterate algorithm to recursively expand the modes retained in the previous step. We obtain general error estimates for the truncated expansion, and prove that the recursive POD representation provides a quasi-optimal approximation in \(L^2\) norm. We also prove an exponential rate of convergence, when applied to the solution of the reaction-diffusion partial differential equation. Relevant numerical experiments show that the recursive POD is more accurate than the Proper Generalized Decomposition for the representation of multivariate functions, for the same number of modes. We also recover the theoretical exponential convergence rate for the solution of the reaction-diffusion equation.

Background

Model Reduction methods are nowadays basic numerical tools in the treatment of large-scale parametric problems arising in real-world applications. They are applied with success, for instance, in signal processing, analysis of random data, solution of parametric partial differential equations and control problems, among others. In signal processing, the Karhunen-Loève expansion (KLE) provides a reliable procedure for a low-dimensional representation of spatiotemporal signals (see [12, 20]). Different research communities use different terminologies for the KLE: it is named the proper orthogonal decomposition (POD) in mechanical computation (see [3]), the principal component analysis (PCA) in statistics and data analysis (see [17, 18, 24]), and the singular value decomposition (SVD) in linear algebra (see [13]). These techniques allow a large reduction of computational costs, thus making affordable the solution of many parametric problems of practical interest that would otherwise be out of reach. Let us mention some representative references [5, 6, 11, 16, 26, 27], although this list is by far not exhaustive.

The extension of the KLE to the tensor representation of multivariate functions is, however, a challenging problem. Real problems are quite often multivariate; let us mention, for instance, the analysis of multivariate stochastic variables, the simulation and control of thermal flows, and multi-component mechanics, among many others. Some recent techniques have been introduced to build low-dimensional tensor decompositions of multivariate functions and data. Among them, the High-Order Singular Value Decomposition (HOSVD) provides low-dimensional approximations of tensor data, in a similar way as the Singular Value Decomposition allows the approximation of bivariate data (see [7, 8, 21]). Also, the Proper Generalized Decomposition (PGD) appears to be well suited in many cases to approximate multivariate functions by low-dimensional varieties (see [14, 15]). However, in general a best approximation of a given high-dimensional tensor by tensors of rank three or larger does not exist (see [9]).

In this paper we study an alternative method to build low-dimensional tensor decompositions of multivariate functions. This is a recursive POD (R-POD), based upon the successive application of the bivariate POD to each of the modes obtained in the previous step. In each step only one of the parameters is active, while the set of the remaining parameters is treated as a single passive parameter. We introduce a feasible version of the R-POD, in which the expansion is truncated whenever the singular values are smaller than a given threshold. This provides a fast algorithm, as only a small number of modes is computed, just those required to achieve a targeted error level.

As an application, we analyze the rate of convergence of the R-POD applied to the approximation of the solution of the reaction-diffusion equation. We prove that the expansion converges at an exponential rate. Our main theoretical tool is the Courant-Fischer-Weyl Theorem, which allows us to reduce the error analysis of the POD expansion to that of the polynomial approximation of the function to be expanded. We also use the analytic dependence of the solution on the diffusivity and reaction rate coefficients, which yields the exponential convergence rate. This analysis is based upon the one introduced in [2]. Further, we use the derived bounds for the singular values to construct a practical truncation error estimator, which is used to recursively compute the expansion by the Power Iterate (PI) method [1]. This avoids computing the full singular value decomposition of the correlation matrix: only the modes needed to reach a given error threshold are computed. The PI method provides a fast and reliable tool to build the POD expansion of bivariate functions, of which we take advantage to recursively build the R-POD expansion.

We present a battery of numerical tests, in which we apply the R-POD to the representation of three-variate functions, and in particular to the solution of the reaction-diffusion equation. We compare the R-POD to the PGD expansion. The PGD expansion can be interpreted as the PI method applied to the effective computation of the POD for bivariate functions (see [23]); we here consider its extension to multivariate functions. We obtain exponential convergence rates for both the R-POD and PGD expansions, although the R-POD is in general more accurate than the PGD for the same number of modes. We also recover an exponential rate of convergence for the R-POD approximation of the solution of the reaction-diffusion equation, in quite good agreement with the theoretical expectations.

The outline of the paper is as follows. “The Karhunen-Loève decomposition on Hilbert spaces” section recalls the POD or Karhunen-Loève expansion in Hilbert spaces. In the “Recursive POD representation” section, we introduce the recursive POD decomposition of multivariate functions and carry out a general error analysis. The “Analysis of solutions of the reaction-diffusion” section deals with the error analysis for the R-POD expansion of the solution of the reaction-diffusion equation. Finally, in the “Numerical tests” section we present some numerical tests where we analyze the practical performance of the recursive POD decomposition of multivariate functions.

Notation—Let \(X\subset {\mathbb R}^d\) be a given Lipschitz domain and G a measure space. We denote by \(L^2(G,X)\) the Bochner space of measurable and square integrable functions from G into X (cf. [10]).

The Karhunen-Loève decomposition on Hilbert spaces

The Karhunen-Loève decomposition, also known as Proper Orthogonal decomposition (POD in the sequel) provides a technique to obtain low-dimensional approximations of parametric functions. To describe it, let us consider a Hilbert space H endowed with a scalar product \((\cdot ,\cdot )_H\), and a parameter measure space G. Let us consider a function \(f \in L^2(G, H)\), and introduce the POD operator

$$\begin{aligned} A: H \mapsto H, \quad \text{ A }v= \int _Gf(\gamma )\, (f(\gamma ),v)_H\, d\gamma \quad \text{ for } v \in H . \end{aligned}$$

The POD operator A is linear and bounded. Moreover, it is self-adjoint and non-negative. Indeed, it holds \(A=B^*B\), where \(B:H \mapsto L^2(G)\) and its adjoint operator \(B^*:L^2(G) \mapsto H\) are given by

$$\begin{aligned}&(Bv)(\gamma )=(f(\gamma ),v)_H \quad \text{ for } v \in H, \nonumber \\&B^* \varphi = (\varphi ,f)_{L^2(G)}= \int _G f(\gamma )\, \varphi (\gamma )\, d\gamma \quad \text{ for } \varphi \in L^2(G). \end{aligned}$$
(1)

Furthermore, the operator A is compact. This arises because the operator B is compact by the Kolmogorov compactness criterion in \(L^2(G)\) (cf. Muller [22], Chapter 2).

Consequently, there exists a complete orthonormal basis of H formed by eigenvectors \((v_m)_{m \ge 0}\) of A, associated to non-negative eigenvalues \((\lambda _m)_{m \ge 0}\), that we assume to be ordered in decreasing value. Each non-zero eigenvalue has a finite multiplicity, and 0 is the only possible accumulation point of the spectrum. If H is infinite-dimensional, then \(\lim _{m \rightarrow \infty } \lambda _m =0\).

Moreover, consider the correlation operator \(C=BB^*:L^2(G) \mapsto L^2(G)\),

$$\begin{aligned} \text{ C }\varphi (\gamma )= \int _G (f(\gamma ), f(\mu ))_H\, \varphi (\mu )\, d\mu \quad \text{ for } \varphi \in L^2(G) . \end{aligned}$$

Then the sequence \((\varphi _m)_{m \ge 0}\), with

$$\begin{aligned} \varphi _m(\gamma )= \frac{1}{\sigma _m} (Bv_m)(\gamma )= \frac{1}{{\sigma _m}}\, (f(\gamma ),v_m)_H, \,\sigma _m=\sqrt{\lambda _m} \,(\text{ it } \text{ also } \text{ holds } v_m=\frac{1}{\sigma _m} \, B^* \varphi _m)\nonumber \\ \end{aligned}$$
(2)

is an orthogonal basis of \(L^2(G)\). This yields the abstract Karhunen-Loève decomposition,

Corollary 0.1

It holds

$$\begin{aligned} f(\gamma )=\sum _{m \ge 0} \sigma _m \, \varphi _m(\gamma )\, v_m, \quad \text{ a. } \text{ e. } \text{ in } G, \end{aligned}$$

where the series is convergent in \(L^2(G,H)\).

The main interest of the POD is the following best-approximation property (cf. [22], Chapter 2):

Lemma 0.2

Let \(V_l=\mathrm{Span}(v_1,\ldots ,v_l)\subset H\). Let \(W_l\) be any sub-space of H of dimension l. Then

$$\begin{aligned} \int _G d_H(f(\gamma ),V_l)^2\, d\gamma \le \int _G d_H(f(\gamma ),W_l)^2\, d\gamma , \end{aligned}$$

where

$$\begin{aligned} d_H(v,W_l)=\inf _{w \in W_l} \Vert v-w\Vert _H\quad \text{ for } v \in H \end{aligned}$$

denotes the distance from the element \(v \in H\) to the sub-space \(W_l\).

Let us next consider the case of bivariate functions. Assume that \(X \subset {\mathbb R}^d\) and \(Y \subset {\mathbb R}^s\) are two bounded domains, where d and s are integers \(\ge 1\). Let T be a given function in \( L^2(X\times Y)\) that we want to approximate by a low-dimensional variety. Let us consider the integral operator with kernel T expressed as

$$\begin{aligned} v\mapsto B\,v,\,\, (B\,v)(y)=\int _X T(x,y)\,v(x)\; dx. \end{aligned}$$
(3)

The operator B maps \(L^2(X)\) into \(L^2(Y)\), is bounded and has an adjoint operator \(B^*\) defined from \(L^2(Y)\) into \(L^2(X)\) as

$$\begin{aligned} \varphi \mapsto B^*\,\varphi , \,\, (B^*\,\varphi )(x) = \int _{Y} T(x,y )\,\varphi (y)dy. \end{aligned}$$
(4)

We are thus in a particular case of the previous abstract setting, with \(G=Y\), \(H=L^2(X)\) and \(f(\gamma )(x)=T(x,\gamma )\).
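For a computational view of this bivariate setting, the following minimal Python sketch (not taken from the paper) computes the discrete counterpart of the POD: the kernel T is sampled on a grid, the \(L^2\) inner products are replaced by trapezoidal quadrature, and the SVD of the weighted sample matrix yields the singular values \(\sigma_m\) and discretely orthonormal modes \(v_m\) and \(\varphi_m\). The grid, the weights and the test kernel are illustrative choices.

```python
import numpy as np

def bivariate_pod(T, wx, wy):
    """Discrete POD of a sampled kernel T[i, j] ~ T(x_i, y_j), with quadrature weights wx, wy."""
    W = np.sqrt(np.outer(wx, wy))          # square roots of the quadrature weights
    U, s, Vt = np.linalg.svd(W * T, full_matrices=False)
    v = U / np.sqrt(wx)[:, None]           # modes v_m(x), orthonormal in the discrete L^2(X)
    phi = Vt.T / np.sqrt(wy)[:, None]      # modes phi_m(y), orthonormal in the discrete L^2(Y)
    return s, v, phi

# Illustrative grid, trapezoidal weights and test kernel (assumptions, not from the paper).
x = np.linspace(-1.0, 1.0, 200)
y = np.linspace(-1.0, 1.0, 200)
wx = np.full_like(x, x[1] - x[0]); wx[[0, -1]] *= 0.5
wy = np.full_like(y, y[1] - y[0]); wy[[0, -1]] *= 0.5
T = np.sin(np.outer(x, y))                 # samples of T(x, y) = sin(x*y)

s, v, phi = bivariate_pod(T, wx, wy)
print(s[:5])                               # fast decay of the singular values for a smooth kernel
```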

Recursive POD representation

The POD expansion may be recursively adapted to the representation of multi-parametric functions. Let us consider the case of trivariate functions to avoid unnecessary complexities. Consider a bounded domain \(Z \subset {\mathbb R}^q\) for some integer number \(q \ge 1\), and a trivariate function \(T\in L^2(X \times Y \times Z)\). We identify T with a function of \(L^2(Y \times Z,L^2(X))\) as both spaces are isometric. From Corollary 0.1 we deduce that there exist two orthonormal sets \((v_m)_{m \ge 0}\) and \((\varphi _m)_{m \ge 0}\) which are respectively complete in \(L^2(X)\) and in \(L^2(Y \times Z)\) such that T admits the representation

$$\begin{aligned} T(x,y,z)=\sum _{m \ge 0} \sigma _m \varphi _m(y,z) \,v_m(x), \end{aligned}$$
(5)

where the sum is convergent in \(L^2(Y \times Z,L^2(X))\). Moreover the singular values \(\sigma _m \) (that we assume to be ordered in decreasing value) are non-negative and converge to zero.

We next apply the POD expansion to each mode \(\varphi _m(y,z)\). There exist two orthonormal sets \((u_k^{(m)})_{k \ge 0}\) and \((w_k^{(m)})_{k \ge 0}\) which are respectively complete in \(L^2(Y)\) and \(L^2(Z)\), such that \(\varphi _m\) admits the representation

$$\begin{aligned} \varphi _m(y,z)=\sum _{k \ge 0} \sigma _{k}^{(m)}\, u_k^{(m)}(y)\,w_k^{(m)}(z) , \end{aligned}$$
(6)

where the expansion is convergent in \(L^2(Z,L^2(Y))\), which is isometric to \(L^2(Y \times Z)\). Also, the singular values \( (\sigma _{k}^{(m)})_{k \ge 0}\) are non-negative and decrease to zero. We then have

Lemma 0.3

The function \(T \in L^2(X \times Y \times Z)\) admits the expansions

$$\begin{aligned} T=\sum _{m \ge 0} \sum _{k \ge 0} \sigma _m \, \sigma _{k}^{(m)}\, v_m\otimes u_k^{(m)}\otimes w_k^{(m)}=\sum _{m \ge 0} \sigma _m \, v_m\otimes \left( \sum _{k \ge 0} \sigma _{k}^{(m)}\, u_k^{(m)}\otimes w_k^{(m)}\,\right) ,\nonumber \\ \end{aligned}$$
(7)

where both sums are convergent in \(L^2(X \times Y \times Z)\).

Proof

It is enough to prove that the terms of either series are pairwise orthogonal in \(L^2(X \times Y \times Z)\) with square-summable norms. Consider the partial sum of squared norms for the first one,

$$\begin{aligned} S_N=\sum _{0 \le m \le N} \sum _{ 0\le k \le N}\Vert \sigma _m \, \sigma _{k}^{(m)}\, v_m\otimes u_k^{(m)}\otimes w_k^{(m)} \Vert _{L^2(X \times Y \times Z)}^2. \end{aligned}$$

As the eigenfunctions are orthonormal (in their corresponding spaces),

$$\begin{aligned} S_N=\sum _{0 \le m \le N} \sum _{ 0\le k \le N} |\sigma _m|^2 \, |\sigma _{k}^{(m)}|^2 \le \sum _{0 \le m \le N} |\sigma _m|^2 \le \Vert T\Vert _{L^2(X \times Y \times Z)}^2, \end{aligned}$$

where the first inequality holds because \(\Vert \varphi _m\Vert _{L^2(Y \times Z)}=1\), and then \(\displaystyle \sum _{ k \ge 0}|\sigma _{k}^{(m)}|^2=1\). \(\square \)
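The identity used in this proof can be checked numerically. The sketch below (illustrative, on a uniform grid with the quadrature weights absorbed into the discrete norms) forms the primary SVD of a sampled trivariate array, then the secondary SVD of each mode \(\varphi_m\), and verifies that the coefficients \(\sigma_m\,\sigma_k^{(m)}\) square-sum to \(\Vert T\Vert^2\).

```python
import numpy as np

rng = np.random.default_rng(0)
nx, ny, nz = 12, 10, 8
T = rng.standard_normal((nx, ny, nz))          # arbitrary sampled trivariate function

# Primary expansion: x against the passive parameter (y, z).
U, s, Vt = np.linalg.svd(T.reshape(nx, ny * nz), full_matrices=False)

coeff_sq = 0.0
for m in range(len(s)):
    phi_m = Vt[m].reshape(ny, nz)              # mode phi_m(y, z), of unit discrete norm
    s_m = np.linalg.svd(phi_m, compute_uv=False)
    coeff_sq += s[m] ** 2 * np.sum(s_m ** 2)   # adds s[m]^2, since sum(s_m^2) = ||phi_m||^2 = 1

print(coeff_sq, np.sum(T ** 2))                # both values coincide with ||T||^2
```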

Feasible recursive POD representation

To build up a feasible recursive POD (R-POD) representation, consider a partial sum of the POD representation (7),

$$\begin{aligned} T _{P_M}=\sum _{0 \le m \le M} \sigma _m \,v_m\otimes \left( \sum _{0 \le k \le K_m} \sigma _{k}^{(m)}\, u_k^{(m)}\otimes w_k^{(m)}\,\right) , \end{aligned}$$
(8)

for some given integers \(K_1 \ge 1,\ldots , K_M \ge 1\). The notation \(P_M\) is shorthand for the multi-index \((M,K_1,\ldots ,K_M)\). We have

$$\begin{aligned} \Vert T-T_{P_M}\Vert _{L^2(X \times Y \times Z)}^2= & {} \sum _{ m \ge M+1} |\sigma _m|^2\,\left( \sum _{ k \ge 0}|\sigma _{k}^{(m)}|^2\,\right) + \sum _{ 0 \le m \le M} |\sigma _m|^2\,\left( \sum _{ k \ge K_m+1}|\sigma _{k}^{(m)}|^2\,\right) \nonumber \\\le & {} \sum _{ m \ge M+1} |\sigma _m|^2 + \sum _{ 0 \le m \le M} |\sigma _m|^2\,\left( \sum _{ k \ge K_m+1}|\sigma _{k}^{(m)}|^2\,\right) \end{aligned}$$
(9)

This estimate suggests a practical strategy for the PI method to construct the expansion (8) within a targeted error:

Algorithm FR-POD (Feasible recursive POD representation)

Assume that some estimates of the remainders are computable:

$$\begin{aligned} \sum _{ m \ge M+1} |\sigma _m|^2 \le |\alpha _M|^2, \,\,\sum _{ k \ge K_m+1}|\sigma _{k}^{(m)}|^2 \le |\beta _K^{(m)}|^2. \end{aligned}$$
(10)

Set a tolerance \(\varepsilon > 0\). Let

$$\begin{aligned} A=1/\sqrt{2}, \quad B=1/(\sqrt{2}\,\Vert T\Vert _{L^2(X \times Y \times Z)}). \end{aligned}$$
(11)
  • Step 1: Compute the modes \(\varphi _m\) and \(v_m\) and singular values \(\sigma _m\) for \(m=1,\ldots ,M_\varepsilon \), until \(\alpha _{M_\varepsilon } \le A \, \varepsilon \).

  • Step 2: For each \(m=1,\ldots , M_\varepsilon \), compute the modes \(u_k^{(m)}\) and \(w_k^{(m)}\) and the singular values \(\sigma _k^{(m)}\) for \(k=1,\ldots ,K_m\), until \(\beta _{K_m}^{(m)} \le B \, \varepsilon \).

For smooth functions the singular values decrease very fast, so that good estimators of the remainders are \(\alpha _M = \sigma _{M+1}\), \(\beta _K^{(m)} = \sigma _{K+1}^{(m)}\). For less smooth functions, some more summands of the series defining the remainders could be needed. In the “Analysis of solutions of the reaction-diffusion” section we shall obtain estimators \(\alpha _M \) and \(\beta _K^{(m)} \) when T is the solution of the reaction-diffusion equation, considered as a function depending on three parameters: the diffusivity, the reaction rate and the space-time variable. These estimators decrease exponentially with M.
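A minimal Python sketch of Algorithm FR-POD is given below, under simplifying assumptions: the function is sampled on a uniform grid, the quadrature weights are dropped for readability, plain SVDs stand in for the power-iterate computation of the modes, and the remainder estimators are the simple choices \(\alpha_M=\sigma_{M+1}\) and \(\beta_K^{(m)}=\sigma_{K+1}^{(m)}\) discussed above.

```python
import numpy as np

def fr_pod(T, eps):
    """Feasible recursive POD of a sampled trivariate array T[i, j, k] ~ T(x_i, y_j, z_k)."""
    nx, ny, nz = T.shape
    A, B = 1.0 / np.sqrt(2.0), 1.0 / (np.sqrt(2.0) * np.linalg.norm(T))
    # Primary expansion: x against the passive parameter (y, z).
    U, s, Vt = np.linalg.svd(T.reshape(nx, ny * nz), full_matrices=False)
    M = int(np.sum(s > A * eps))               # keep the modes with sigma_m > A*eps
    terms = []
    for m in range(M):
        phi_m = Vt[m].reshape(ny, nz)          # secondary expansion of the mode phi_m(y, z)
        Um, sm, Wmt = np.linalg.svd(phi_m, full_matrices=False)
        Km = int(np.sum(sm > B * eps))         # keep the secondary modes with sigma_k^(m) > B*eps
        for k in range(Km):
            terms.append((s[m] * sm[k], U[:, m], Um[:, k], Wmt[k]))
    return terms                               # list of (tau, v_m, u_k^(m), w_k^(m))

# Reconstruction: T is approximated by the sum of tau * v_m(x) u_k^(m)(y) w_k^(m)(z), e.g.
#   sum(tau * np.einsum('i,j,k->ijk', v, u, w) for tau, v, u, w in terms).
```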

Lemma 0.4

Let \(T_{\varepsilon }\) be the representation of T provided by Algorithm FR-POD within an error level \(\varepsilon \). It holds

$$\begin{aligned} \Vert T-T _{\varepsilon }\Vert _{L^2(X \times Y \times Z)} \le \varepsilon . \end{aligned}$$
(12)

Proof

From estimates (9) and (10),

$$\begin{aligned} \Vert T-T_{P_M}\Vert _{L^2(X \times Y \times Z)}^2\le & {} |\alpha _{M_\varepsilon }|^2 + \sum _{ 0 \le m \le M} |\sigma _m|^2\, |\beta _{K_m}^{(m)}|^2\le (A^2 + \Vert T\Vert _{L^2(X \times Y \times Z)}^2 \, B^2)\,\varepsilon ^2 = \varepsilon ^2, \nonumber \end{aligned}$$

where we have used that \( \displaystyle \sum _{ 0 \le m \le M} |\sigma _m|^2 \le \Vert T\Vert _{L^2(X \times Y \times Z)}^2\). \(\square \)

In practice we recursively compute the expansion by the PI method [1]. This avoids computing the full singular value decomposition of the correlation matrix: we just compute the modes needed to reach a given error threshold.
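The following Python sketch illustrates the PI idea on a generic kernel matrix; it shows the principle only (quadrature weights omitted) and is not the authors' implementation. The deflation step mentioned in the closing comment is a standard way to obtain the subsequent modes.

```python
import numpy as np

def dominant_triplet(K, tol=1e-12, max_iter=1000):
    """Power iteration for the dominant singular triplet (sigma, u, v) of a matrix K."""
    rng = np.random.default_rng(0)
    v = rng.standard_normal(K.shape[1])
    v /= np.linalg.norm(v)
    sigma = 0.0
    for _ in range(max_iter):
        u = K @ v
        sigma_new = np.linalg.norm(u)
        u /= sigma_new
        v = K.T @ u
        v /= np.linalg.norm(v)
        if abs(sigma_new - sigma) <= tol * sigma_new:
            break
        sigma = sigma_new
    return sigma_new, u, v

# To compute further modes, deflate K <- K - sigma * np.outer(u, v) and repeat;
# the loop over modes stops as soon as sigma drops below the targeted threshold.
```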

Quasi-optimality of recursive POD representation

The POD representation in general provides the most accurate representation in \(L^2\) norm, for a given number of truncation modes. This is due to the best-approximation property stated in Lemma 0.2. Let us consider a trivariate approximation of T with M modes, of the form

$$\begin{aligned} \hat{T}_{M}(x,y,z)= \sum _{0 \le m \le M} \hat{X}_m(x)\, \hat{Y}_m(y)\, \hat{Z}_m(z), \quad \text{ for } (x,y,z) \in X\times Y \times Z. \end{aligned}$$
(13)

Lemma 0.5

Let \(T \in L^2(X \times Y \times Z)\). It holds

$$\begin{aligned} \Vert T - T_M \Vert _{L^2(X \times Y \times Z)} \le \Vert T - \hat{T}_M \Vert _{L^2(X \times Y \times Z)} , \end{aligned}$$
(14)

where

$$\begin{aligned} T_M(x,y,z)=\sum _{0 \le m \le M } \sigma _m \varphi _m(y,z) \,v_m(x), \end{aligned}$$
(15)

and \(\hat{T}_M\) is any trivariate approximation of T with M modes, of the form (13).

Proof

Let \(V_M\) be the space spanned by \(\varphi _1,\ldots ,\varphi _M\) in \(L^2(Y \times Z)\). Observe that, for a.e. \(x \in X\), \(T_M(x)\) is the orthogonal projection in \(L^2(Y \times Z)\) of \(T(x)\) on \(V_M\). Let \(W_M\) be any sub-space of dimension M of \(L^2(Y \times Z)\). Then, due to Lemma 0.2, it holds

$$\begin{aligned} \int _X\Vert (T- T_M)(x)\Vert _{L^2(Y \times Z)} ^2\, dx \le \int _X\Vert (T- S_M)(x)\Vert _{L^2(Y \times Z)} ^2\, dx \end{aligned}$$

for any \(S_M\) with values in \(W_M\), where we denote \(T(x)(y,z)=T(x,y,z)\), and similarly \(T_M(x)\) and \(S_M(x)\). As the spaces \(L^2(X,L^2(Y \times Z))\) and \(L^2(X \times Y \times Z)\) are isometric, taking for \(W_M\) the space spanned by the products \(\hat{Y}_m\otimes \hat{Z}_m\) and \(S_M= \hat{T}_M\), the inequality (14) follows. \(\square \)

Note that in particular this implies that the POD expansion (15) is more accurate than the three-variate PGD one.

The following result states the quasi-optimality of the feasible R-POD representation.

Lemma 0.6

It holds

$$\begin{aligned} \Vert T - T_\varepsilon \Vert _{L^2(X \times Y \times Z)} \le \Vert T - \hat{T}_M \Vert _{L^2(X \times Y \times Z)} +\varepsilon /\sqrt{2}, \end{aligned}$$
(16)

for any trivariate approximation \(\hat{T}_M\) of T with M modes, of the form (13).

Proof

We have

$$\begin{aligned} \Vert T - T_\varepsilon \Vert _{L^2(X \times Y \times Z)}\le & {} \Vert T - T_M \Vert _{L^2(X \times Y \times Z)} +\Vert T_M - T_\varepsilon \Vert _{L^2(X \times Y \times Z)}\\\le & {} \Vert T - T_M \Vert _{L^2(X \times Y \times Z)}+\left( \sum _{ 0 \le m \le M} |\sigma _m|^2\,\left( \sum _{ k \ge K_m+1}|\sigma _{k}^{(m)}|^2\,\right) \,\right) ^{1/2} \\\le & {} \Vert T - T_M \Vert _{L^2(X \times Y \times Z)}+ \varepsilon /\sqrt{2} \le \Vert T - \hat{T}_M \Vert _{L^2(X \times Y \times Z)}+ \varepsilon /\sqrt{2}, \end{aligned}$$

where the second-to-last estimate is obtained similarly to the proof of Lemma 0.4, and the last one follows from Lemma 0.5. \(\square \)

Then, the feasible R-POD representation is more accurate than \(\hat{T}_M\), for \(\varepsilon \) small enough, whenever the inequality in (16) is strict. If (16) is an equality, this means that \(\hat{T}_M\) is optimal; in this case the accuracy of the feasible R-POD representation can be made arbitrarily close to the optimal one. It should be noted, however, that the R-POD contains more modes than \(\hat{T}_M\). Anyhow, we present some numerical experiments in the “Numerical tests” section that show that the R-POD representation is more accurate than the PGD one, for the same number of modes.

Analysis of solutions of the reaction-diffusion

Let us now consider the homogeneous Dirichlet boundary value problem of the linear reaction-diffusion equation,

$$\begin{aligned} \left\{ \begin{array}{rcll} \partial _t T -\gamma \, \Delta T+\alpha \, T &{}=&{}f &{}\quad \text{ in } \quad \mathcal{Q}, \\ T&{}=&{}0 &{}\quad \text{ in }\, \,(0,b)\times \partial \Omega ,\\ T(x,0)&{}=&{}T_0(x) &{}\quad \text{ in } \,\, \Omega , \end{array} \right. \end{aligned}$$
(17)

where \(\gamma >0\) and \(\alpha \ge 0\) respectively denote the diffusivity and the reaction rate, and \(\mathcal{Q}=\Omega \times (0,b)\). This problem fits into the functional framework of constant-coefficient linear parabolic equations, and admits a unique solution \(T \in L^2((0,b),H^1(\Omega ))\) such that \(\partial _t T \in L^2(\mathcal{Q})\) if \(f \in L^2(\mathcal{Q})\) and \(T_0 \in L^2(\Omega )\). We shall assume that the pair \((\gamma ,\alpha )\) ranges in a set \(\mathcal{G}=[\gamma _m, \gamma _M]\times [\alpha _0,\alpha _M]\) with \(0<\gamma _m <\gamma _M\), \(0\le \alpha _0 \le \alpha _M\). Our purpose in this section is to analyze the rate of convergence of the approximation of T by a recursive POD expansion in separated tensor form:

$$\begin{aligned} T((x,t), (\gamma ,\alpha )) \simeq T_P((x,t), (\gamma ,\alpha ))= \sum _{m =0}^M\sum _{i =0 }^{I} \tau _i^{(m)} \, \varphi _i^{(m)}(\gamma )\, w_i^{(m)}(\alpha )\, v_m(x,t), \end{aligned}$$
(18)

where \(P=(M,I)\), the \(\tau _i^{(m)}\) are real numbers and \(\varphi _i^{(m)} \in L^2(\gamma _m, \gamma _M)\), \( w_i^{(m)} \in L^2(0,\alpha _M)\) and \(v_m \in L^2(\mathcal{Q})\) are eigenmodes. To obtain this expression, let us start from the POD expansion of T where \(\mu =(\gamma ,\alpha )\in \mathcal{G}\) and \(z=(x,t) \in \mathcal{Q}\),

$$\begin{aligned} T((x,t), (\gamma ,\alpha )) = \sum _{m \ge 0} \sigma _m \, \varphi _m(\gamma ,\alpha )\, v_m(x,t), \end{aligned}$$
(19)

where the expansion converges in \(L^2(\mathcal{G}\times \mathcal{Q})\). As \(\varphi _m \in L^2(\mathcal{G})\), it also admits a POD expansion

$$\begin{aligned} \varphi _m(\gamma ,\alpha ) =\sum _{i \ge 0} \sigma _i^{(m)}\, u_i^{(m)}(\gamma ) \,w_i^{(m)}(\alpha ), \end{aligned}$$
(20)

which is convergent in \(L^2(\mathcal{G})\), where \(\{u_i^{(m)}\}_{i \ge 0}\) is an orthonormal basis of \(L^2(\gamma _m, \gamma _M)\) and \(\{w_i^{(m)}\}_{i \ge 0}\) is an orthonormal basis of \(L^2(0,\alpha _M)\). If we truncate the expansion (19) for T to \(M+1\) summands and that (20) for \(\varphi _m\) to \(I+1 \) summands, then we recover the expression for \(T_P\) in (18), with \(\tau _i^{(m)} = \sigma _m \, \sigma _i^{(m)}\) and \(\varphi _i^{(m)}=u_i^{(m)}\).

To analyze the rate of convergence of \(T_P\) towards T, we need some technical tools. Let us consider the orthonormal Fourier basis \(\{e_k\} \) of \( L^2(\Omega )\) formed by the eigenfunctions of the Laplace operator. It holds

$$\begin{aligned} -\Delta e_k = \lambda _k \, e_k \quad \text{ in }\, \Omega , \quad e_k=0 \,\,\, \text{ in }\,\,\, \partial \Omega , \end{aligned}$$
(21)

where \(\lambda _k >0\) is the eigenvalue associated to \(e_k\). The sequence \(\{\lambda _k\}_{k \ge 0}\) is ordered to be non-decreasing, with \(\displaystyle \lim _{k \rightarrow \infty }\lambda _k=+\infty \). We decompose \(T_0\) and f as

$$\begin{aligned}&T_0(x)=\sum _{k \ge 0} a_k \, e_k(x), \quad f(x,t)=\sum _{k \ge 0} f_k(t) \, e_k(x), \,\,\, \text{ with }\,\,\, a_k=(T_0,e_k)_{L^2(\Omega )},\,\, f_k(t)=(f(\cdot ,t),e_k)_{L^2(\Omega )}, \end{aligned}$$

where the series are respectively convergent in \(L^2(\Omega )\) and \(L^2(\mathcal{Q})\), and

$$\begin{aligned} \Vert T_0\Vert _{L^2(\Omega )}^2 = \sum _{k\ge 0} |a_k|^2,\quad \Vert f\Vert _{L^2(\mathcal{Q})}^2 = \sum _{k\ge 0} \Vert f_k\Vert _{L^2(0,b)}^2 . \end{aligned}$$
(22)

The solution of the reaction-diffusion equation is then expanded in terms of the eigenfunctions \(e_k\),

$$\begin{aligned} T((x,t), (\gamma ,\alpha ))= \sum _{k \ge 0} \theta _k(t,(\gamma ,\alpha ))\, e_k(x), \end{aligned}$$
(23)

where the coefficients \(\theta _k\) are defined by

$$\begin{aligned} \theta _k(t, (\gamma ,\alpha )) = a_k \, e^{-(\gamma \, \lambda _k+\alpha ) \, t} + \int _0^t f_k(s)\, e^{-(\gamma \lambda _k+\alpha )(t-s)}\, ds. \end{aligned}$$
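As an illustration of this formula, the following sketch evaluates \(\theta_k(t)\) by quadrature of the convolution integral on the model domain \(\Omega=(0,\pi)\), where the Dirichlet eigenpairs are \(e_k(x)=\sqrt{2/\pi}\,\sin(kx)\) and \(\lambda_k=k^2\); all numerical values are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def theta_k(t_grid, a_k, f_k, lam_k, gamma, alpha, n_quad=200):
    """theta_k(t) = a_k exp(-(gamma*lam_k+alpha) t) + int_0^t f_k(s) exp(-(gamma*lam_k+alpha)(t-s)) ds."""
    mu = gamma * lam_k + alpha
    vals = []
    for t in t_grid:
        s = np.linspace(0.0, t, n_quad)
        h = f_k(s) * np.exp(-mu * (t - s))
        conv = np.sum(0.5 * (h[:-1] + h[1:]) * np.diff(s))   # trapezoidal rule for the convolution
        vals.append(a_k * np.exp(-mu * t) + conv)
    return np.array(vals)

# Illustrative use: k = 2 on Omega = (0, pi), so lam_k = 4, with a constant source coefficient.
t = np.linspace(0.0, 1.0, 50)
print(theta_k(t, a_k=1.0, f_k=lambda s: np.ones_like(s), lam_k=4.0, gamma=1.0, alpha=0.5)[:3])
```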

We shall consider T as a mapping from \(\mathcal G\) into \(L^2(\mathcal{Q})\) that maps a pair \((\gamma , \alpha ) \in \mathcal{G}\) to the function \(T((\cdot ,\cdot ), (\gamma ,\alpha )) \in L^2(\mathcal{Q})\), which we denote \(T_{(\gamma ,\alpha )}\).

Our main result is the following.

Theorem 0.7

The truncated POD series expansion \(T_P\) given by (18) satisfies the error estimate

$$\begin{aligned} \Vert T-T_P\Vert _{L^2(\mathcal{G}\times \mathcal{Q})} \le C_{\rho }\, ( \rho ^{- M}+ \sqrt{M}\,\rho ^{-I}) , \end{aligned}$$
(24)

for any \(1< \rho <\rho _*\), where \(C_{\rho }>0\) is a constant depending on \(\rho \), unbounded as \(\rho \rightarrow 1\), and \(\rho _*=(\sqrt{\gamma _M}+\sqrt{\gamma _m})/(\sqrt{\gamma _M}-\sqrt{\gamma _m})\).

Therefore, the recursive POD expansion converges with spectral accuracy in terms of the number of truncation modes in the main and secondary expansions.

The proof of this result is essentially based upon the analyticity of T with respect to the diffusivity \(\gamma \) and the reaction rate \(\alpha \). It is rather technical, and will be developed after several lemmas, the first of which is

Lemma 0.8

The mapping \((\gamma , \alpha ) \in \mathcal{G}\mapsto T_{(\gamma ,\alpha )}\in L^2(\mathcal{Q})\) is analytic.

Proof

According to (23), T is the sum of two contributions, coming from the initial condition \(T_0\) and the source f. We prove the analyticity for each of them.

i.—Let us consider the part generated by the initial condition, corresponding to

$$\begin{aligned} \theta _k(t, (\gamma ,\alpha )) = a_k \, e^{-(\gamma \, \lambda _k+\alpha ) \, t}. \end{aligned}$$

Let us bound the residual

$$\begin{aligned}&\begin{aligned} \sup _{\gamma \ge \epsilon , \alpha \ge 0 } \; \Big \Vert \sum _{k \ge L} \theta _k(t, (\gamma ,\alpha )) \, e_k(x) \Big \Vert _{L^2(\mathcal Q)}^2&= \sup _{\gamma \ge \epsilon , \alpha \ge 0 } \; \sum _{k \ge L} (a_k)^2 \int _0^b e^{-2(\gamma \lambda _k +\alpha )t}\; dt \\&=\sup _{\gamma \ge \epsilon , \alpha \ge 0 } \;\sum _{k \ge L} (a_k)^2 \frac{1-e^{-2 (\gamma \lambda _k+\alpha ) b}}{2(\gamma \lambda _k+\alpha )} \le \frac{1}{2\epsilon \lambda _0} \sum _{k \ge L} (a_k)^2 , \end{aligned} \end{aligned}$$

for any integer \(L \ge 0\). Then the series converges uniformly on each set \([\epsilon ,+\infty [\times [0,+\infty [\), for all \(\epsilon >0\). As each term in the series (23) determines an analytic function from \(\mathcal G\) into \(L^2(\mathcal Q)\), the limit is analytic from \((0,+\infty ) \times (0,+\infty )\) into \(L^2(\mathcal Q)\).

ii.—Let us now investigate the part arising from the source f, corresponding to

$$\begin{aligned} \theta _k(t, (\gamma ,\alpha )) = \int _0^t f_k(s)\, e^{-(\gamma \lambda _k+\alpha )(t-s)}\, ds. \end{aligned}$$
(25)

This requires a preliminary statement.

Let \(g \in L^2(0,b)\) and \(\lambda >0\) be given. Then the function

$$\begin{aligned} G: (\gamma ,\alpha ) \mapsto \int _0^t g(s)\, e^{-(\gamma \lambda +\alpha )(t-s)}\, ds, \end{aligned}$$

mapping \(]0,+\infty [ \times ]0,+\infty [\) into \(L^2(0,b)\), is analytic.

To prove it, we show that \((\gamma ,\alpha ) \mapsto G(\gamma ,\alpha )\) is locally expressed as a convergent power series. Let \(\gamma _0>0\), \(\alpha _0 >0\) be fixed. On account of the analyticity of the exponential we derive that

$$\begin{aligned} G(\gamma ,\alpha )(t)=\sum _{n\ge 0} \frac{[\lambda (\gamma -\gamma _0)+(\alpha -\alpha _0)]^n}{n!}\, G_n(t), \quad \text{ with }\quad G_n(t)=\int _0^t g(s)\, \big (-(t-s)\big )^n\, e^{-(\gamma _0\lambda +\alpha _0)(t-s)}\, ds. \end{aligned}$$
This series is absolutely convergent in \(L^2(0,b)\). Indeed, the integral term being a convolution, Young’s inequality implies that

$$\begin{aligned}&\sum _{n\ge 0} \frac{|\lambda (\gamma -\gamma _0)+(\alpha -\alpha _0)|^n}{n!} \Vert G_n\Vert _{L^2(0,b)}\\ {}&\quad \le \sum _{n\ge 0} \frac{|\lambda (\gamma -\gamma _0)+(\alpha -\alpha _0)|^n}{n!} \Vert g\Vert _{L^2(0,b)} \Vert (- t)^n e^{-(\gamma _0\lambda +\alpha _0)t}\Vert _{L^1(0,\infty )} \\&\quad =\frac{\Vert g\Vert _{L^2(0,b)}}{\lambda \gamma _0+\alpha _0} \sum _{n\ge 0} \frac{|\lambda (\gamma -\gamma _0)+(\alpha -\alpha _0)|^n}{(\lambda \gamma _0+\alpha _0)^n} . \end{aligned}$$

The geometric series is convergent for \((\gamma ,\alpha )\) such that \(|(\lambda \gamma +\alpha )-(\lambda \gamma _0+\alpha _0)| < \eta \), provided that \(\eta <\lambda \gamma _0+\alpha _0\). Then, the function \( G: \,]0,+\infty [ \times ]0,+\infty [\,\mapsto L^2(0,b)\) is analytic.

To finish the proof, let us check that the series (23) with \(\theta _k\) given by (25) is uniformly convergent in \([\epsilon ,+\infty [\times [0,+\infty [\), for all \(\epsilon >0\). For a given L we have

$$\begin{aligned} \sup _{\gamma \ge \epsilon ,\alpha \ge 0 } \; \Big \Vert \sum _{k\ge L} \theta _k(t, (\gamma ,\alpha )) \, e_k(x) \Big \Vert _{L^2(\mathcal Q)}^2= & {} \sup _{\gamma \ge \epsilon ,\alpha \ge 0 }\;\sum _{k\ge L} \Vert \theta _k(t, (\gamma ,\alpha )) \Vert _{L^2(0,b)}^2 \\\le & {} \sup _{\gamma \ge \epsilon ,\alpha \ge 0 } \;\sum _{k\ge L} \Vert f_k\Vert _{L^2(0,b)}^2 \Vert e^{-(\gamma \lambda _k+\alpha ) t}\Vert _{L^1(0,\infty )}^2\\\le & {} \frac{1}{(\epsilon \lambda _0)^2}\sum _{k\ge L} \Vert f_k\Vert _{L^2(0,b)}^2. \end{aligned}$$

Then the series (23) of analytic functions is uniformly convergent. As a result, the limit is also analytic. The proof is complete. \(\square \)

Another preliminary tool required in our study is related to the polynomial approximation of regular vector-valued functions. We shall adapt a result by S. Bernstein (1912), stated for complex-valued functions and improved since then in many works (see for instance [19]). For some \(\rho >1\), let the set \({ E}_\rho \) in the complex plane be defined as

$$\begin{aligned} E_\rho =\big \{ \zeta \in \mathbf {C};\quad |\zeta -1| + |\zeta +1| \le \rho +\rho ^{-1} \,\big \}. \end{aligned}$$

Consider a function \(F:{ E}_\rho \rightarrow H \), where H is a Hilbert space. For a given integer \(M \ge 0\), let \(F_M\) be the truncated Chebyshev polynomial series expansion of F of degree M with coefficients in H. The shape of the polynomial \(F_M\) will be fixed later on (see Remark 0.2). Following the proof given in [19], we obtain

Lemma 0.9

Assume that F is analytic and bounded in \(E_\rho \). There holds that

$$\begin{aligned} \max _{\xi \in [-1,1]}\Vert F(\xi ) - F_M(\xi )\Vert _H \le {C}_{\rho }\, \rho ^{-M}. \end{aligned}$$

Remark 0.1

The constant in the lemma may be fixed to (see [25, Theorem 8.2])

$$\begin{aligned} C_\rho = \frac{2 }{\rho -1}\Vert F\Vert _{L^\infty ( E_\rho ,H)}, \end{aligned}$$

that blows up as \(\rho \) goes to unity.

We now need to derive similar approximation estimates for analytic vector valued functions defined from \(\mathcal G\) into \(L^2(\mathcal Q)\). The following result holds

Lemma 0.10

For any \(\alpha \in [0,\alpha _M]\) there exists a polynomial \(S_M^{(\alpha )}\) ranging from \([\gamma _m,\gamma _M]\) into \(L^2(\mathcal Q)\), with degree \(\le M\), such that for all \(\rho \) with \(1<\rho < \rho _*\),

$$\begin{aligned} \max _{(\gamma ,\alpha ) \in \mathcal{G}}\Vert T (\gamma ,\alpha ) - S_M^{(\alpha )}(\gamma )\Vert _{L^2(\mathcal Q)} \le \hat{C}_\rho \, \rho ^{-M}, \end{aligned}$$

where \(\hat{C}_\rho \) is a non-negative constant, possibly unbounded as \(\rho \rightarrow 1\).

Proof

We only give a sketch of the proof. By Lemma 0.8, for any given \(\alpha \ge 0\), the vector-valued function \(\gamma \mapsto T(\gamma ,\alpha )\) is analytic in the half-plane \(\mathrm{Re}\, \gamma >0\). This implies that, provided that \(\rho <\rho _*\), the ellipse

$$\begin{aligned} \mathcal{E}_\rho =\Big \{ \zeta \in \mathbf {C}; \quad |\zeta - \gamma _M| + |\zeta -\gamma _m| \le \frac{\gamma _M-\gamma _m}{2} (\rho +\rho ^{-1}) \,\Big \} \end{aligned}$$

is included in the analyticity set of T. Consider thus the coordinates transformation

$$\begin{aligned} \zeta = \tau (\hat{\zeta }):= \frac{\gamma _M-\gamma _m}{2} \hat{\zeta }+ \frac{\gamma _M+\gamma _m}{2}, \qquad \hat{\zeta }\in E_\rho . \end{aligned}$$

It is affine and bijective from \( E_\rho \) into \( \mathcal E_\rho \) and transforms the reference interval \([-1,1]\) into \(G=[\gamma _m, \gamma _M]\). This transformation makes it possible to construct such a polynomial \(S_M^{(\alpha )}\). In fact, we start by constructing the truncated Chebyshev series expansion \(\hat{S}_M^{(\alpha )}(\hat{\zeta })\) of the (transformed) function \(\hat{T}^{(\alpha )} (\hat{\zeta }) = T (\zeta ,\alpha )\). Then, back to the interval \([\gamma _m,\gamma _M]\), we set \(S_M^{(\alpha )}(\zeta ) = \hat{S}_M^{(\alpha )}(\hat{\zeta })\).

The error estimate now follows from Lemma 0.9:

$$\begin{aligned} \max _{\gamma \in G}\Vert T (\gamma ,\alpha ) - S_M^{(\alpha )}(\gamma )\Vert _{L^2(\mathcal Q)} \le \frac{2 }{\rho -1}\Vert T (\cdot ,\alpha )\Vert _{L^\infty ( \mathcal{E}_\rho ,L^2(\mathcal Q))} \, \rho ^{-M}\le \frac{2K }{\rho -1} \, \rho ^{-M}, \end{aligned}$$

where

$$\begin{aligned} K= \sup _{\alpha \in [0,\alpha _M]} \Vert T (\cdot ,\alpha )\Vert _{L^\infty ( \mathcal{E}_\rho ,L^2(\mathcal Q))} , \end{aligned}$$

which is finite and independent of \(\alpha \in [0,\alpha _M]\) due to the uniform boundedness of \(T(\gamma ,\alpha )\) in compact sets of \((0,+\infty ) \times (0,+\infty )\). The proof is complete. \(\square \)

Remark 0.2

The polynomial \(S_M^{(\alpha )}\) may be put under the form

$$\begin{aligned} S_M^{(\alpha )}(\gamma ) =\sum _{0\le m\le M} w_m^{(\alpha )}\, U_m(\gamma ) , \qquad \forall \gamma \in G, \end{aligned}$$

where \(U_m\) stands for the polynomial obtained by transporting the Chebyshev polynomial of degree m defined in \([-1,1]\) to the interval G, and the coefficients \((w_m^{(\alpha )})_{0\le m\le M}\) belong to \(L^2(\mathcal Q)\).
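The mechanism behind Lemmas 0.9 and 0.10 can be illustrated on a scalar stand-in. The function \(g(\gamma)=1/\gamma\) has its analyticity region limited by a singularity at \(\gamma=0\), mimicking the constraint \(\mathrm{Re}\,\gamma>0\) satisfied by T, so its transported Chebyshev interpolants on \([\gamma_m,\gamma_M]\) converge geometrically, essentially at the rate \(\rho_*^{-M}\). The Python sketch below uses illustrative parameter values (assumptions, not the paper's data).

```python
import numpy as np
from numpy.polynomial.chebyshev import Chebyshev

gamma_m, gamma_M = 1.0, 51.0                    # illustrative parameter interval
rho_star = (np.sqrt(gamma_M) + np.sqrt(gamma_m)) / (np.sqrt(gamma_M) - np.sqrt(gamma_m))
g = lambda gam: 1.0 / gam                        # scalar stand-in, singular at gamma = 0

dense = np.linspace(gamma_m, gamma_M, 5000)
for deg in (4, 8, 16, 32):
    xhat = np.cos((2 * np.arange(deg + 1) + 1) * np.pi / (2 * (deg + 1)))   # Chebyshev nodes in [-1, 1]
    nodes = 0.5 * (gamma_M - gamma_m) * xhat + 0.5 * (gamma_M + gamma_m)    # transported to [gamma_m, gamma_M]
    p = Chebyshev.fit(nodes, g(nodes), deg, domain=[gamma_m, gamma_M])      # interpolating polynomial S_M
    print(deg, np.max(np.abs(g(dense) - p(dense))), rho_star ** (-deg))     # observed error vs rho_*^{-M}
```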

Proof of Theorem 0.7

Let us consider the truncated primary expansion

$$\begin{aligned} T_M((x,t), (\gamma ,\alpha )) = \sum _{m =0}^M \sigma _m \, \varphi _m(\gamma ,\alpha )\, v_m(x,t), \end{aligned}$$

for some integer \(M\ge 0\). Let \(S_M\) be the vector-valued polynomial (considered as a function of \((\gamma ,\alpha )\)) constructed in Lemma 0.10. In view of Lemma 0.2 and Remark 0.2, the following estimate holds,

$$\begin{aligned} \Vert T-T_M\Vert _{L^2(\mathcal{G}\times \mathcal Q)} \le \Vert T-S_M\Vert _{L^2(\mathcal{G}\times \mathcal Q)} \le |\mathcal{G}|^{1/2} \, \max _{(\gamma ,\alpha )\in \mathcal{G}} \Vert T(\gamma ,\alpha )-S_M^{(\alpha )}(\gamma )\Vert _{L^2(\mathcal Q)}. \end{aligned}$$

Applying the result stated in Lemma 0.10 it follows that

$$\begin{aligned} \Vert T-T_M\Vert _{L^2(\mathcal{G}\times \mathcal Q)} \le \hat{C}_\rho \, \rho ^{-M}. \end{aligned}$$
(26)

Next, observe that as the sequence \((v_m)_{m \ge 0}\) is orthonormal in \(L^2(\mathcal{Q})\), then

$$\begin{aligned} \Vert T_M - T_P\Vert _{L^2(\mathcal{G}\times \mathcal Q)}^2 \le \sum _{m=0}^M \sigma _m^2 \, \Vert \varphi _m - \varphi _m^{(I)}\Vert _{L^2(\mathcal{G})}^2, \end{aligned}$$
(27)

where

$$\begin{aligned} \varphi _m^{(I)} (\gamma ,\alpha )= \sum _{i=0}^I \sigma _i^{(m)}\, u_i^{(m)}(\gamma )\, w_i^{(m)}(\alpha ) \end{aligned}$$

is the truncated POD expansion of \(\varphi _m\) to \(I+1\) terms. Also, by (2),

$$\begin{aligned} \varphi _m(\gamma ,\alpha )=\frac{1}{\sigma _m}\,\int _\mathcal{Q} T((x,t),(\gamma ,\alpha )) \, v_m(x,t)\, dx\, dt. \end{aligned}$$
(28)

Then \(\varphi _m\) is an analytic function from \((0,+\infty )\times (0,+\infty )\) into \({\mathbb R}\). By an argument similar to that of Lemma 0.10, we prove that for any \(\alpha \in [0,\alpha _M]\) there exists a polynomial in \(\gamma \), \(r_{I,\alpha }^{(m)}(\gamma )\), of degree less than or equal to I such that

$$\begin{aligned} \max _{\gamma \in [\gamma _m,\gamma _M]} | \varphi _m (\gamma ,\alpha )-r_{I,\alpha }^{(m)}(\gamma )| \le \frac{2}{\rho -1} \, \Vert \varphi _m(\cdot ,\alpha )\Vert _{L^\infty (\mathcal{E}_\rho )}\, \rho ^{-I}. \end{aligned}$$
(29)

From (28), we deduce \( \sigma _m \,|\varphi _m(\gamma ,\alpha )| \le \Vert T(\gamma ,\alpha )\Vert _{L^2(\mathcal{Q})} \) for all \((\gamma ,\alpha ) \in \mathcal{G}\), and then

$$\begin{aligned} \sigma _m \, \max _{(\gamma ,\alpha ) \in \mathcal{G}} | \varphi _m (\gamma ,\alpha )-r_{I,\alpha }^{(m)}(\gamma )| \le \frac{K}{\rho -1} \, \rho ^{-I}, \end{aligned}$$

for some constant \(K \ge 0\) independent of m and \(\alpha \). Consequently, in view of Lemma 0.2,

$$\begin{aligned} \sigma _m^2 \, \Vert \varphi _m - \varphi _m^{(I)}\Vert _{L^2(\mathcal{G})}^2 \le \sigma _m^2 \, \Vert \varphi _m - r_I^{(m)}\Vert _{L^2(\mathcal{G})}^2\le |\mathcal{G}|\,\sigma _m^2 \, \Vert \varphi _m - r_I^{(m)}\Vert _{L^\infty (\mathcal{G})}^2 \le \overline{C}_\rho ^2\, \rho ^{-2I},\nonumber \\ \end{aligned}$$
(30)

where \(r_I^{(m)}(\gamma ,\alpha )= r_{I,\alpha }^{(m)}(\gamma )\) and \(\displaystyle \overline{C}_\rho = |\mathcal{G}|^{1/2}\,\frac{K}{\rho -1}\). From (27) we deduce that

$$\begin{aligned} \Vert T_M - T_P\Vert _{L^2(\mathcal{G}\times \mathcal Q)} \le \overline{C}_\rho \,\sqrt{M} \, \rho ^{-I}. \end{aligned}$$

Combining this estimate with (26) completes the proof. \(\square \)

Remark 0.3

  • The constant \(C_\rho \) in estimate (24) also depends on the parameters domain \(\mathcal G\). We do not make explicit this dependence to simplify the notation.

  • The limit value for the convergence rates \(\rho _*\) only depends on the ratio \(\gamma _M/\gamma _m\), as

    $$\begin{aligned} \rho _*= \frac{2}{\sqrt{\frac{\gamma _M}{\gamma _m}} -1}+1. \end{aligned}$$
  • In view of estimate (24), in general a quasi-optimal choice for I is \(I=M+\displaystyle \frac{1}{2}\log _{\rho } M\) (actually, the closest integer to this number), so that \(\sqrt{M}\,\rho ^{-I}\simeq \rho ^{-M}\). In this case,

    $$\begin{aligned} \Vert T-T_P\Vert _{L^2(\mathcal{G}\times \mathcal{Q})} \le C_\rho \, \rho ^{-M}. \end{aligned}$$

    We thus obtain the same asymptotic convergence order when \(M \rightarrow \infty \) as for \(\Vert T-T_M\Vert _{L^2(\mathcal{G}\times \mathcal{Q})} \).

  • For more general parameter-dependent parabolic equations, the above technique applies if the elliptic operator is symmetric. This allows one to diagonalize the problem and to expand the solution as a series in terms of the eigenfunctions of the elliptic operator. The use of the Courant–Fischer–Weyl Theorem [19] allows one to reduce the estimate of the truncation error of the POD expansion to the estimate of the approximation error of the solution with respect to one of the parameters, for instance by polynomial functions. Then the convergence rate of the POD expansion depends on the smoothness of the solution with respect to the parameters of the problem.

Reordering of recursive POD expansion

A practical way to re-order the expansion (18) is in decreasing order of the values \( \tau _i^{(m)}=\sigma _m \,\sigma _i^{(m)}\). This leads to an expansion of the form

$$\begin{aligned} T_P((x,t), (\gamma ,\alpha ))=\sum _{l=0}^L \tilde{\tau }_l \, \tilde{\varphi }_l(\gamma )\, \tilde{w}_l(\alpha )\, \tilde{v}_l(x,t),\quad L=(M+1)(I+1), \end{aligned}$$
(31)

where the sets \(\{\tau _i^{(m)}\,,\, m=0,\ldots ,M,\, i=0,\ldots ,I\}\) and \(\{\tilde{\tau }_l\,,\, l=0,\ldots ,L\}\) coincide, and \(\tilde{\tau }_0 \ge \tilde{\tau }_1 \ge \cdots \ge \tilde{\tau }_L\).
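A small Python sketch of this reordering is given below, using synthetic singular values with the geometric decay discussed further down (estimate (38)); the values of \(\rho\) and the numbers of modes are illustrative only.

```python
import numpy as np

# Synthetic singular values with geometric decay (illustrative).
rho = 1.5
sigma = rho ** -np.arange(4)                              # primary values sigma_m
sigma_sec = [rho ** -np.arange(3) for _ in range(4)]      # secondary values sigma_i^(m)

tau = [(sigma[m] * s, m, i)
       for m in range(len(sigma))
       for i, s in enumerate(sigma_sec[m])]
tau.sort(key=lambda t: -t[0])                             # reordered values tilde_tau_l
print([(round(t, 3), m, i) for t, m, i in tau[:6]])       # decreasing, grouped by m + i
```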

To analyze the rate of convergence of this rearrangement of the R-POD expansion, let us first remark that Theorem 0.7 allows, as a by-product, to estimate the singular values \(\sigma _m\) and \(\sigma _i^{(m)}\). Indeed, denoting by \({\mathcal L}(E,F)\) the set of linear bounded mappings from a Banach space E into a Banach space F, the following bound holds,

$$\begin{aligned} \sigma _{M+1} = \min _{B_M \in {\mathcal L (L^2(\mathcal G),L^2(\mathcal Q)}), \mathrm{rank } B_M \le M} \Vert B -B_M\Vert _{{\mathcal L (L^2(\mathcal G),L^2(\mathcal Q)})}, \end{aligned}$$
(32)

where

$$\begin{aligned} (B \varphi ) (z) = \int _\mathcal{G} T(\gamma ,z)\,\varphi (\gamma )\; d\gamma \quad \forall z\in \mathcal Q. \end{aligned}$$

Consider the operator

$$\begin{aligned} (\tilde{B}_{M} \varphi ) (z) = \int _\mathcal{G} T_M(\gamma ,z)\,\varphi (\gamma )\; d\gamma \quad \forall z\in \mathcal Q. \end{aligned}$$
(33)

Then by estimate (26)

$$\begin{aligned} \sigma _{M+1} \le \Vert B-\tilde{B}_{M}\Vert _{ \mathcal{L}} \le \Vert T-T_{M}\Vert _{ L^2(\mathcal{G}\times \mathcal Q)} \le \hat{C}_\rho \, \rho ^{-M}. \end{aligned}$$
(34)

Similarly,

$$\begin{aligned} \sigma _{I+1}^{(m)} \le \min _{E_{I}^{(m)} \in \mathcal L, \,\mathrm{rank}\, E_{I}^{(m)} \le I}\Vert E^{(m)}- E_{I}^{(m)}\Vert _{ \mathcal{L}(L^2(G),L^2([0,\alpha _M]))}, \end{aligned}$$

where \(\displaystyle (E^{(m)} u) (\alpha ) = \int _G \varphi _m(\gamma ,\alpha ) \, u(\gamma )\; d\gamma , \quad \forall \alpha \in [0,\alpha _M]. \) Let us assume that the \(\varphi _m\) satisfy the additional (slightly) stronger boundedness property

$$\begin{aligned} \sup _{\alpha \in [0,\alpha _M],\, m =0,1,\ldots } \Vert \varphi _m(\cdot ,\alpha )\Vert _{L^\infty (\mathcal{E}_\rho )} <+\infty . \end{aligned}$$
(35)

Then, in view of estimate (29),

$$\begin{aligned} \sigma _{I+1}^{(m)} \le \Vert E^{(m)}-\tilde{E}_{I}^{(m)}\Vert _{ \mathcal{L}(L^2(G),L^2([0,\alpha _M]))} \le \Vert \varphi _m-r_{I}^{(m)}\Vert _{ L^2(\mathcal G)} \le \hat{C}_\rho ^{(m)}\, \rho ^{-I}, \end{aligned}$$
(36)

where \(\displaystyle (\tilde{E}_{I}^{(m)} u) (\alpha ) = \int _G r_{I}^{(m)}(\gamma ,\alpha ) \, u(\gamma )\; d\gamma , \quad \forall \alpha \in [0,\alpha _M]. \)

Then the error associated to this reordering, for large M and I, is estimated by

$$\begin{aligned} \Vert T-T_P\Vert _{L^2(\mathcal{G}\times \mathcal{Q})} \le D_{{\rho }}\, L^{1/4}\, {\rho }^{ -2\,\sqrt{L}} \end{aligned}$$
(37)

for any \(1< {\rho } < \rho _*\), where \(D_\rho \) is a constant, possibly unbounded as \({\rho } \rightarrow 1\). To justify it, let us write \(T_P\) as

$$\begin{aligned} T_P((x,t), (\gamma ,\alpha ))=\sum _{k=0}^K \sum _{m+i=k} \tau _i^{(m)} \, \varphi _i^{(m)}(\gamma )\, w_i^{(m)}(\alpha )\, v_m(x,t), \end{aligned}$$

where for simplicity we assume that M and I are such that \(L=(K+1)(K+2)/2 \) for some integer \(K\ge 0\). For other values there appears a residual corresponding to high-order modes, which is asymptotically negligible, as it is of higher order with respect to \(\rho \). If estimates (34) and (36) are sharp, it holds

$$\begin{aligned} \tau _i^{(m)}\simeq A_\rho \,\rho ^{-(m+i)} \end{aligned}$$
(38)

for some constant \(A_\rho \). Then, \(\tau _i^{(m)} < \tau _j^{(n)}\) if \(i+m > j+n\), and consequently the set \(\{\tilde{\tau }_l, \,k(k+1)/2\le l \le (k+1)(k+2)/2-1\,\} \) coincides with the set \(\{\tau _i^{(m)},\, m+i=k\,\}\). Then, due to estimate (38),

$$\begin{aligned} \Vert T-T_P\Vert _{L^2(\mathcal{G}\times \mathcal{Q})}^2\le & {} \sum _{k \ge K+1}\sum _{m+i= k} |\tau _i^{(m)}|^2 \le A_\rho \, \sum _{k \ge K+1} (k+1) \, \rho ^{-2k}. \end{aligned}$$
(39)

As \(\displaystyle \sum _{k \ge K+1} (k+1) \, \rho ^{-2k}\simeq (K+1) \, \rho ^{-2(K+1)}\), \(K \simeq \sqrt{2L}\), then (37) follows.

Practical implementation

Assume again that estimate (36) is sharp. Then \(\displaystyle \sum _{i \ge I+1} |\sigma _i^{(m)}|^2 \simeq C_m' \, \rho ^{-2I} \simeq |\sigma _{I+1}^{(m)}|^2\). Thus, we may set the estimator \(\beta _I^{(m)}=\sigma _{I+1}^{(m)}\), and similarly \(\alpha _M = \sigma _{M+1}\), in (10). This suggests considering a different number of summands in the secondary expansions of (18), which leads to an expansion as in (8),

$$\begin{aligned} T_P((x,t), (\gamma ,\alpha ))= \sum _{m =0}^M\sum _{i =0 }^{I_m} \tau _i^{(m)} \, \varphi _i^{(m)}(\gamma )\, w_i^{(m)}(\alpha )\, v_m(x,t), \end{aligned}$$
(40)

where M and \(I_m\) are determined to fit the error tolerance tests \(\sigma _{M+1} \le A\, \varepsilon \) and \(\sigma _{I_m+1}^{(m)} \le B \, \varepsilon \), where A and B are given in (11). In practice, for simplicity, these may be replaced by \(\sigma _{M+1} \le \varepsilon \) and \(\sigma _{I_{m}+1}^{(m)} \le \varepsilon \).

Also, in view of (38) and (39), we deduce that a good estimator of the error \(\Vert T-T_P\Vert _{L^2(\mathcal{G}\times \mathcal{Q})}\) is \(\tau _I^{(M)}\), associated to the last computed mode, such that \(I+M=K\).

Numerical tests

This section is devoted to assessing the practical performance of the feasible R-POD expansion. In particular, we confirm the exponential rate of convergence of the truncated POD expansion for the reaction-diffusion equation proved in the “Analysis of solutions of the reaction-diffusion” section. We are also interested in comparing the rates of convergence of the R-POD and PGD expansions, as the latter is particularly well suited to approximate multivariate functions. We have considered functions with high and low smoothness, as the smoothness plays a crucial role in the decrease of the size of the modes in both expansions. In addition, we have tested the ability of both representations to approximate functions that already have a separated tensor structure. For completeness we describe in the Appendix the application of the PGD expansion to the approximation of multivariate functions.

Multi-variate functions

In this test we apply the R-POD and the PGD to approximate multivariate functions. Actually, we consider trivariate functions as a generic test to determine the relative performances of both expansions. We have considered the following tests:

Case 1: Function with tensor structure.

$$\begin{aligned} S_1(x,y,z)=x+y+z. \end{aligned}$$
(41)

Case 2: Function with non-tensor structure.

$$\begin{aligned} S_2(x,y,z)=\sin (xyz). \end{aligned}$$
(42)

Case 3: Function with low regularity

$$\begin{aligned} S_3(x,y,z)=\sqrt{x+2y+z+4}. \end{aligned}$$
(43)
Fig. 1 Comparison of errors for feasible R-POD and PGD. Function \(S_1(x,y,z)=x+y+z\)

Fig. 2 Comparison of errors for feasible R-POD and PGD. Function \(S_2(x,y,z)=\sin (xyz)\)

Fig. 3 Comparison of errors for feasible R-POD and PGD. Function \(S_3(x,y,z)=\sqrt{x+2y+z+4}\)

The space domain is fixed to \(\Omega =X \times Y \times Z\), with \(X=Y=Z=]-1,1[\), and Gauss–Lobatto–Legendre quadrature is used (see [4]) with polynomial degree \(N=64\). These formulas are used to evaluate the matrix representations of the operators B and A.

We set the error tolerance in \(L^2(X \times Y \times Z)\) on the residual of both the R-POD and PGD expansions to \(\mu = 10^{-7}\). This corresponds to \(\varepsilon =10^{-14}\) in Algorithm FR-POD. We display in Figs. 1, 2 and 3 the comparison of the convergence history of the feasible R-POD and PGD processes, for the three trivariate functions considered. The x-axis represents the number of eigenmodes while the y-axis represents the \(L^2(X \times Y \times Z)\) error, in logarithmic coordinates. We observe in Fig. 1 that the R-POD just needs 3 modes to fit a function that already has a separated tensor structure, while the PGD requires 17 modes to reach the same error level. Further, for functions with low smoothness both expansions require approximately the same number of modes to reach a moderate accuracy; however, the R-POD is more efficient at reaching high accuracy in all cases. Finally, the error associated to the R-POD expansion is in almost all cases below the one associated to the PGD expansion for the same number of modes.

Table 1 displays the number of modes required by each expansion to bring the error below the threshold \(\mu = 10^{-7}\). We observe that both expansions converge for all the cases considered, although in general both require a larger number of modes to approximate functions with lower smoothness. Also, in all the cases considered the R-POD requires fewer modes than the PGD.

Table 1 Comparison of feasible R-POD and PGD for trivariate functions

Reaction-Diffusion equation

This part is devoted to determining the effective convergence rate of the R-POD approximation of some solutions to the transient reaction-diffusion equation when parameterized by the diffusivity and reaction coefficients. We assess the exponential convergence rate and investigate the variation of this rate with respect to the set \(\mathcal{G}=[\gamma _m,\gamma _M]\times [\alpha _m,\alpha _M]\).

Test 1: Exponential convergence rate.

We consider the time-dependent reaction-diffusion equation in the domain \(\mathcal Q=(0,1)\times (0,1)\) and we select three possible pairs of source terms and initial conditions, given by

$$\begin{aligned} {\mathbf {Data}}\,{\mathbf {1}}{:}\qquad f(t,x)= & {} \sqrt{|x-t-0.3|}, \qquad \quad T_0(x) =0, \\ {\mathbf {Data}}\,{\mathbf {2}}{:}\qquad f(t,x)= & {} 0,\qquad \qquad \qquad \qquad \quad T_0(x) = |x-0.4|,\\ {\mathbf {Data}}\,{\mathbf {3}}{:}\qquad f(t,x)= & {} \sqrt{|x-t-0.3|},\quad \qquad T_0(x) = |x-0.4|. \end{aligned}$$

These data have mild singularities, so the temperature solutions of (17) have reduced regularity with respect to x and t, in particular at \(t=0\) for the last two data sets. The heat problem is discretized by an Euler scheme in time and a Gauss–Lobatto–Legendre spectral method in space (see [4]); the time step is \(\delta t = 10^{-2}\) and the polynomial degree is \(N=64\).

The matrix representations of the operators B and A are computed by means of accurate quadrature formulas. Indeed, the various integrals (with respect to either \(\gamma \), \(\alpha \) or (t, x)) are computed using Gauss–Lobatto quadrature formulas with high resolution in the corresponding intervals.

Fig. 4 Convergence history for the POD expansion of the solution of the reaction-diffusion equation. Data 1 (top left), Data 2 (top right) and Data 3 (bottom)

Figure 4 shows the convergence history of the R-POD expansion (40) of the solution of the reaction-diffusion equation, in terms of the total number of modes in the expansion. We have considered the set of diffusivities \(\gamma \in [1,51]\) and reaction rates \(\alpha \in [0,100]\). The error is measured in the \(L^2(\mathcal{Q})\) norm. The numbers of secondary modes \(I_m\) have been determined to fit the test \(\sigma _{I_m+1}^{(m)} \le \varepsilon =10^{-10}\). In practice a small number of secondary modes (actually, \(I_m \simeq 4\)) is needed to fit this test. The modes have been re-arranged in decreasing order of the effective singular values \(\tau _i^{(m)}=\sigma _m \, \sigma _i^{(m)}\) (denoted by filled circle symbols). We observe that the \(\tau _i^{(m)}\) indeed are good error estimators for this re-arranged expansion, as was argued in the “Practical implementation” section.

To assess the regularity of the eigenmodes associated to the conductivity parameter \(\gamma \), we plot the first three of them, corresponding to the largest singular values. The computation is made for Data 3. From Fig. 5 we clearly observe that these functions are regular. The same observation holds for the reaction rate parameter.

Fig. 5 First three eigenmodes associated to the conductivity parameter (\(\gamma \))

Test 2: Dependence of the convergence rate with respect to the parameters range.

The dependence of the exponential convergence rate, stated in Theorem 0.7, with respect to the ratio of diffusivities \(R=\gamma _M/\gamma _m\) is illustrated in Fig. 6. We depict the convergence history for Data 3, computed for \(R=25, 64\) and 400, in all cases with a fixed interval of reaction rates \([\alpha _m,\alpha _M]=[0,100]\), with respect to the square root of the number of modes, \(M=\sqrt{L}\). We can point out that the convergence rate degrades as R increases, in accordance with the fact that

$$\begin{aligned} \rho _*= \frac{2}{\sqrt{\frac{\gamma _M}{\gamma _m}} -1}+1. \end{aligned}$$

We observe some gap between the purely exponential decay of the error and the computed one, as the error curve in logarithmic coordinates appears to be a slightly concave curve instead of a straight line. This is consistent with the presence of the factor \(L^{1/4}\) in estimate (37).

Fig. 6 Variation of the R-POD errors (in logarithmic scale) with respect to the ratio \(R=\gamma _M/\gamma _m\), for fixed \(\alpha _m=0\), \(\alpha _M=100\). The variable M stands for the square root of the number of modes

In Table 2, we present the computed exponential convergence rate \(\alpha _c=2 \,\log \rho _c\), so that the \(L^2(\mathcal{G}\times \mathcal{Q})\) error, in terms of the number of modes after rearranging the RPOD series, is assumed to satisfy

$$\begin{aligned} e(L) =C \, e^{-\alpha _c \, \sqrt{L}}, \end{aligned}$$

and the theoretical one given by \(\alpha ^*=2\,\log \rho _*\). The value \(\alpha _c\) is calculated by exponential regression. We indeed recover an exponential rate of convergence with respect to the square root of the number of modes, with an effective convergence rate larger than the theoretical one. We observe numerically that the computed rate in all cases is larger than one (see Table 2). We thus observe a kind of super-convergence effect.
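The exponential regression can be performed, for instance, as a least-squares fit of \(\log e(L)\) against \(\sqrt{L}\). The following sketch uses synthetic error values (not the paper's data) to illustrate how \(\alpha_c\) and C are recovered.

```python
import numpy as np

L = np.arange(4, 40)                        # total numbers of modes (illustrative)
e = 2.0 * np.exp(-0.9 * np.sqrt(L))         # synthetic errors following e(L) = C exp(-alpha_c sqrt(L))

slope, intercept = np.polyfit(np.sqrt(L), np.log(e), 1)   # linear fit of log e against sqrt(L)
alpha_c, C = -slope, np.exp(intercept)
print(alpha_c, C)                           # recovers alpha_c = 0.9 and C = 2.0 for these data
```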

Table 2 Computed and theoretical convergence rates, for different values of \(R=\gamma _M/\gamma _m\) and fixed \(\alpha _m=0\), \(\alpha _M=100\) (for Data 3)

We next test the dependence of the convergence rate with respect to the interval of reaction rates \([\alpha _0,\alpha _M]\). We show in Fig. 7 the convergence histories corresponding to \(\alpha _m=0\), \(\alpha _M= 10,100,500,1000\), for fixed \(\gamma _m=1\), \(\gamma _M=51\). We observe a decrease of the rate as \(\alpha _M\) increases, which however appears to remain uniformly bounded, in agreement with estimate (37), where the dependence of the error bound with respect to \([\alpha _0,\alpha _M]\) only appears through the coefficient \(D_\rho \).

Fig. 7 Variation of the R-POD errors with respect to the reaction rate \(\alpha \). The curves correspond to \(\alpha _0=0\), \(\alpha _M=10\), 100, 500, 1000, with \(\gamma _m=1\), \(\gamma _M=51\) in all cases. The variable M stands for the square root of the number of modes

The last numerical experiment studies whether the dependence of the exponential convergence rate on the diffusivity range \([\gamma _m,\gamma _M]\) indeed takes place through the ratio \(R=\gamma _M/\gamma _m\). This is confirmed by the results plotted in Fig. 8, where we consider the pairs \((\gamma _m,\gamma _M)=(1,2)\) and (4, 8), corresponding to \(R=2\), and \((\gamma _m,\gamma _M)=(1,25)\) and (4, 100), corresponding to \(R=25\), with fixed \(\alpha _0=0\), \(\alpha _M=100\).

Fig. 8 Analysis of the dependence of the R-POD errors with respect to the ratio \(R=\gamma _M/\gamma _m\). The curves correspond to the indicated pairs \((\gamma _m,\gamma _M)\). The variable M stands for the square root of the number of modes

Conclusion

We have introduced in this paper a recursive POD (R-POD) expansion to approximate multivariate functions. The approach consists in building truncated recursive POD expansions of the modes that appear in the expansion at the previous level, up to a given error tolerance. We have constructed a practical truncation error estimator by means of bounds for the singular values, which is used to recursively compute the expansion by the Power Iterate (PI) method. This allows computing just the modes needed to reach a given error threshold. We have proved the quasi-optimality of this R-POD expansion in \(L^2\), similar to that of the POD expansion.

We have proved the exponential rate of convergence of the R-POD expansion for the solution of the reaction-diffusion equation, based upon the analyticity of the solution with respect to the diffusivity and reaction rate parameters.

We have finally performed some relevant numerical tests that, on the one hand, show that the R-POD is more accurate than the PGD expansion for trivariate functions and, on the other hand, confirm the exponential rate of convergence for the solution of the reaction-diffusion equation, in good agreement with the qualitative and quantitative theoretical expectations.

Further extensive tests for more complex multivariate functions, in particular of practical interest for engineering applications, are in progress and will appear in a forthcoming paper.

References

  1. Azaïez M, Belgacem Ben F. Karhunen-Loève’s truncation error for bivariate functions. Comput Methods Appl Mech Eng. 2015;290:57–72.

    Article  Google Scholar 

  2. Azaiez M, Ben Belgacem F and Chacón Rebollo T. Error bounds for POD expansions of parameterized transient temperatures. Submitted to Comp. Methods App. Mech. Eng.

  3. Berkoz G, Holmes P, Lumley JL. The proper orthogonal decomposition in the analysis of turbulent flows. Annu Rev Fluids Mech. 1993;25:539–75.

    Article  MathSciNet  Google Scholar 

  4. Bernardi C, Maday Y. Approximations spectrales de problèmes aux limites elliptiques, Mathématiques et applications. Berlin: Springer; 1992.

  5. Chinesta F, Keunings R, Leygue A. The Proper Generalized Decomposition for Advanced Numerical Simulations: A Primer. New York: Springer Publishing Company, Incorporated; 2013.

  6. Chinesta F, Ladevèse P, Cueto E. A Short Review on Model Order Reduction Based on Proper Generalized Decomposition. Arch Comput Methods Eng. 2011;18:395–404.

  7. De Lathauwer L, De Moor B, Vandewalle J. A multilinear singular value decomposition. SIAM J Matr Anal Appl. 2000;21(4):1253–1278.

  8. De Lathauwer L, De Moor B, Vandewalle J. On the best rank-1 and rank-(R1, R2, ..., RN) approximation of higher-order tensors. SIAM J Matr Anal Appl. 2000;21(4):1324–1342.

  9. De Silva V, Lim LH. Tensor rank and the ill-posedness of the best low-rank approximation problem. SIAM J Matrix Anal Appl. 2008;20(3):1084–1127.

  10. Diestel J, Uhl JJ. Vector measures. Providence: American Mathematical Society; 1977.

  11. Epureanu BI, Tang LS, Paidoussis MP. Coherent structures and their influence on the dynamics of aeroelastic panels. Int J Non-Linear Mech. 2004;39:977–91.

  12. Ghanem R, Spanos P. Stochastic finite elements: a spectral approach. Springer-Verlag; 1991.

  13. Golub GH, Van Loan CF. Matrix Computations. 3rd ed. Baltimore: The Johns Hopkins University Press; 1996.

  14. Heyberger C, Boucard PA, Néron D. A rational strategy for the resolution of parametrized problems in the PGD framework. Comp Meth Appl Mech Eng. 2013;259:40–9.

  15. Heyberger C, Boucard PA, Néron D. Multiparametric Analysis within the Proper Generalized Decomposition Framework. Comput Mech. 2012;49(3):277–89.

  16. Holmes P, Lumley JL, Berkooz G. Coherent Structures, Dynamical Systems and Symmetry, Cambridge Monographs on Mechanics. Cambridge: Cambridge University Press; 1996.

  17. Hotelling H. Analysis of a complex of statistical variables into principal components. J Educ Psychol. 1933;24:417–41.

  18. Jolliffe IT. Principal Component Analysis. Springer; 1986.

  19. Little G, Reade JB. Eigenvalues of analytic kernels. SIAM J Math Anal. 1984;15:133–6.

  20. Loève MM. Probability Theory. Princeton: Van Nostrand; 1988.

  21. Lorente LS, Vega JM, Velazquez A. Generation of Aerodynamic Databases Using High-Order Singular Value Decomposition. J Aircraft. 2008;45(5):1779–88.

  22. Muller M. On the POD method. An abstract investigation with applications to reduced-order modeling and suboptimal control. PhD thesis. Georg-August Universität, Göttingen; 2008.

  23. Nouy A. A generalized spectral decomposition technique to solve a class of linear stochastic partial differential equations. Comput Meth Appl Mech Eng. 2007;196:4521–37.

  24. Pearson K. On lines and planes of closest fit system of points in space. Philo Mag J Sci. 1901;2:559–72.

  25. Trefethen LN. Approximation theory and approximation practice. Software, Environments, and Tools. Philadelphia: Society for Industrial and Applied Mathematics (SIAM); 2013.

  26. Willcox K, Peraire J. Balanced Model Reduction via the Proper Orthogonal Decomposition. AIAA. 2002;40:2323–30.

  27. Yano M. A Space-Time Petrov-Galerkin Certified Reduced Basis Method: Application to the Boussinesq Equations. SIAM J Sci Comput. 2013;36:232–66.

Authors' contributions

MA, FBB and TCR participated in the development of the mathematical proofs and the numerical investigations. They checked the results and wrote the manuscript. All authors read and approved the final manuscript.

Acknowledgements

None.

Competing interests

The authors declare that they have no competing interests.

Author information

Corresponding author

Correspondence to M. Azaïez.

Appendix: The PGD representation of multivariate functions

We describe in this section the procedure to calculate the PGD representation of a multivariate function. We focus on trivariate functions for the sake of clarity; the extension to general multivariate functions is straightforward.

The PGD approximation of a trivariate function T searches for an expansion of the form

$$\begin{aligned} T(x,y,z)= \sum _{m \ge 0} X_m(x)\, Y_m(y)\, Z_m(z), \,\, \text{ for } (x,y,z) \in X\times Y \times Z. \end{aligned}$$
(44)

The leading term \(X_0\otimes Y_0\otimes Z_0\) is initially computed by means of an adaptation of the Power Iteration algorithm. Assume that an approximation \(X_0^{(n-1)}\otimes Y_0^{(n-1)}\otimes \, Z_0^{(n-1)}\) is known; one iteration proceeds as follows.

Step 1. Find \(Z_0^{(n)} \in L^2(Z)\) such that for all \(Z^* \in L^2(Z)\),

$$\begin{aligned} \left( \, X_0^{(n-1)}\otimes Y_0^{(n-1)}\otimes \, Z_0^{(n)} - T, \; X_0^{(n-1)}\otimes Y_0^{(n-1)}\otimes \, Z^{*} \,\right) _{L^2(X\times Y\times Z)} = 0. \end{aligned}$$
(45)

Step 2. Find \(\tilde{X}_0^{(n)} \in L^2(X)\) such that for all \(X^* \in L^2(X)\),

$$\begin{aligned} \left( \,\tilde{X}_0^{(n)}\otimes Y_0^{(n-1)}\otimes \, Z_0^{(n)} - T, \; X^{*}\otimes Y_0^{(n-1)}\otimes \, Z_0^{(n)} \,\right) _{L^2(X\times Y\times Z)} = 0. \end{aligned}$$
(46)

Set

$$\begin{aligned} X_0^{(n)}=\frac{\tilde{X}_0^{(n)}}{\Vert \tilde{X}_0^{(n)}\Vert _{L^2(X)}}. \end{aligned}$$

Step 3. Find \(\tilde{Y}_0^{(n)} \in L^2(Y)\) such that for all \(Y^* \in L^2(Y)\),

$$\begin{aligned} \left( \, X_0^{(n)}\otimes \tilde{Y}_0^{(n)}\otimes \, Z_0^{(n)} - T, \; X_0^{(n)}\otimes Y^{*}\otimes \, Z_0^{(n)} \,\right) _{L^2(X\times Y\times Z)} = 0. \end{aligned}$$
(47)

Set

$$\begin{aligned} Y_0^{(n)}=\frac{\tilde{Y}_0^{(n)}}{\Vert \tilde{Y}_0^{(n)}\Vert _{L^2(Y)}}. \end{aligned}$$

This procedure is iterated until the error falls below a given tolerance.
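As an illustration only, the following sketch implements the alternating fixed point of Steps 1–3 for grid samples of T, with the \(L^2(X\times Y\times Z)\) inner products replaced by plain Euclidean dot products (quadrature weights are omitted for brevity). The function name `pgd_rank_one`, the iteration count and the stopping test are hypothetical choices of ours.

```python
import numpy as np

def pgd_rank_one(T, n_iter=50, rtol=1e-10):
    """Alternating fixed point of Steps 1-3 for a 3D sample array T[i, j, k].

    Returns unit vectors X, Y and the amplitude-carrying factor Z, a discrete
    counterpart of X_0 (x) Y_0 (x) Z_0. Quadrature weights are omitted.
    """
    nx, ny, nz = T.shape
    X = np.ones(nx) / np.sqrt(nx)                 # arbitrary unit initial guesses
    Y = np.ones(ny) / np.sqrt(ny)
    Z_old = np.zeros(nz)
    for _ in range(n_iter):
        # Step 1 (Eq. 45): project T onto X (x) Y to get the un-normalized Z factor.
        Z = np.einsum('ijk,i,j->k', T, X, Y) / ((X @ X) * (Y @ Y))
        # Step 2 (Eq. 46): project T onto Y (x) Z, then normalize in x.
        X = np.einsum('ijk,j,k->i', T, Y, Z) / ((Y @ Y) * (Z @ Z))
        X /= np.linalg.norm(X)
        # Step 3 (Eq. 47): project T onto X (x) Z, then normalize in y.
        Y = np.einsum('ijk,i,k->j', T, X, Z) / ((X @ X) * (Z @ Z))
        Y /= np.linalg.norm(Y)
        # Stop when the amplitude-carrying factor stagnates.
        if np.linalg.norm(Z - Z_old) <= rtol * np.linalg.norm(Z):
            break
        Z_old = Z.copy()
    return X, Y, Z
```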

The Mth mode \(X_M\otimes Y_M \otimes Z_M\) is computed in the same way, by replacing the function T by the residual \(T-\hat{T}_{M-1}\), where now

$$\begin{aligned} \hat{T}_{M-1}(x,y,z)= \sum _{0 \le m \le M-1} X_m(x)\, Y_m(y)\, Z_m(z), \,\, \text{ for } (x,y,z) \in X\times Y \times Z. \end{aligned}$$
(48)

In this way, the residual \(T-\hat{T}_M\) is orthogonal to \(Span(X_M\otimes Y_M \otimes Z_M)\).
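Under the same assumptions, the deflation just described can be sketched by reusing `pgd_rank_one` from the previous block on the successive residuals; the name `pgd_expand` and the fixed number of modes are ours, not the authors'.

```python
import numpy as np

def pgd_expand(T, n_modes, n_iter=50):
    """Greedy PGD expansion (44)/(48): each new term is the rank-one PGD
    approximation of the current residual T - T_hat_{M-1}.
    Relies on pgd_rank_one from the previous sketch."""
    residual = T.copy()
    modes = []
    for _ in range(n_modes):
        X, Y, Z = pgd_rank_one(residual, n_iter=n_iter)
        term = X[:, None, None] * Y[None, :, None] * Z[None, None, :]
        residual = residual - term                # deflate the captured rank-one term
        modes.append((X, Y, Z))
    return modes, residual                        # the norm of residual measures the truncation error
```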

There is no proof, to the best of the authors' knowledge, that the PGD expansion (44) exists for functions \(T \in L^2(X\times Y \times Z)\), or even under additional regularity assumptions, nor that the alternate Power Iteration process (45)–(47) converges. There is a proof, however, that for general functions depending on three or more parameters there do not exist optimal subspaces of finite dimension 3 or larger satisfying the optimal approximation property set by Theorem 0.2 (see [9]).

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

About this article

Cite this article

Azaïez, M., Belgacem, F.B. & Rebollo, T.C. Recursive POD expansion for reaction-diffusion equation. Adv. Model. and Simul. in Eng. Sci. 3, 3 (2016). https://doi.org/10.1186/s40323-016-0060-1


Keywords