 Research Article
 Open Access
Recursive POD expansion for reaction-diffusion equation
Advanced Modeling and Simulation in Engineering Sciences volume 3, Article number: 3 (2016)
Abstract
This paper focuses on the low-dimensional representation of multivariate functions. We study a recursive POD representation, based upon the use of the power iterate algorithm to recursively expand the modes retained in the previous step. We obtain general error estimates for the truncated expansion, and prove that the recursive POD representation provides a quasi-optimal approximation in \(L^2\) norm. We also prove an exponential rate of convergence, when applied to the solution of the reaction-diffusion partial differential equation. Some relevant numerical experiments show that the recursive POD is computationally more accurate than the Proper Generalized Decomposition for multivariate functions. We also recover the theoretical exponential convergence rate for the solution of the reaction-diffusion equation.
Background
Model Reduction methods are nowadays basic numerical tools in the treatment of large-scale parametric problems appearing in real-world applications. They are applied with success, for instance, in signal processing, analysis of random data, solution of parametric partial differential equations and control problems, among others. In signal processing, the Karhunen-Loève expansion (KLE) provides a reliable procedure for a low-dimensional representation of spatiotemporal signals (see [12, 20]). Different research communities use different terminologies for the KLE. It is named the proper orthogonal decomposition (POD) in mechanical computation (see [3]), referred to as principal component analysis (PCA) in statistics and data analysis (see [17, 18, 24]), or called singular value decomposition (SVD) in linear algebra (see [13]). These techniques allow a large reduction of computational costs, thus making affordable the solution of many parametric problems of practical interest that would otherwise be out of reach. Let us mention some representative references [5, 6, 11, 16, 26, 27], although this list is by far not exhaustive.
The extension of the KLE to the tensor representation of multivariate functions is, however, a challenging problem. Real problems are quite often multivariate. Let us mention, for instance, the analysis of multivariate stochastic variables, the simulation and control of thermal flows and multi-component mechanics, among many others. Some recent techniques have been introduced to build low-dimensional tensor decompositions of multivariate functions and data. Among them, the High-Order Singular Value Decomposition (HOSVD) provides low-dimensional approximations of tensor data, in a similar way as the Singular Value Decomposition allows one to approximate bivariate data (see [7, 8, 21]). Also, the Proper Generalized Decomposition (PGD) appears to be well suited in many cases to approximate multivariate functions by low-dimensional varieties (see [14, 15]). However, in general there is no optimal low-rank approximation of a given tensor of order three or larger (see [9]).
In this paper we study an alternative method to build low-dimensional tensor decompositions of multivariate functions. This is a recursive POD (RPOD), based upon the successive application of the bivariate POD to each of the modes obtained in the previous step. In each step only one of the parameters is active, while the set of the remaining parameters is considered as a passive single parameter. We introduce a feasible version of the RPOD, in which the expansion is truncated whenever the singular values are smaller than a given threshold. This provides a fast algorithm, as only a small number of modes is computed, just those required to achieve a targeted error level.
As an application, we analyze the rate of convergence of the RPOD applied to approximate the solution of the reaction-diffusion equation. We prove that the expansion converges with exponential rate. Our main theoretical tool is the Courant-Fischer-Weyl Theorem, which allows us to reduce the error analysis of the POD expansion to that of the polynomial approximation of the function to be expanded. We also use the analytic dependence of the solution on the diffusivity and reaction rate coefficients, which yields the exponential convergence rate. This analysis is based upon the one introduced in [2]. Further, we use subsequent bounds for the singular values to construct a practical truncation error estimator, which is used to recursively compute the expansion by the Power Iterate (PI) method [1]. This avoids computing the full singular value decomposition of the correlation matrix: only the modes needed to achieve a given error threshold are computed. The PI method provides a fast and reliable tool to build the POD expansion of bivariate functions, of which we take advantage to recursively build the RPOD expansion.
We present a battery of numerical tests, in which we apply the RPOD to the representation of three-variate functions, and in particular to the solution of the reaction-diffusion equation. We compare the RPOD to the PGD expansion. The PGD expansion can be interpreted as the PI method applied to the effective computation of the POD for bivariate functions (see [23]). We here consider its extension to multivariate functions. We obtain exponential convergence rates for both the RPOD and PGD expansions, although the RPOD is in general more accurate than the PGD for the same number of modes. We also recover an exponential rate of convergence for the RPOD approximation of the solution of the reaction-diffusion equation, in quite good agreement with the theoretical expectations.
The guidelines of the paper are as follows. “The Karhunen-Loève decomposition on Hilbert spaces” section recalls the POD or Karhunen-Loève expansion in Hilbert spaces. In the “Recursive POD representation” section, we introduce the recursive POD decomposition of multivariate functions, and carry out a general error analysis. The “Analysis of solutions of the reaction-diffusion” section deals with the error analysis for the RPOD expansion of the solution of the reaction-diffusion equation. Finally, in the “Numerical tests” section we present some numerical tests in which we analyze the practical performance of the recursive POD decomposition of multivariate functions.
Notation—Let \(X\subset {\mathbb R}^d\) be a given Lipschitz domain and G a measure space. We denote by \(L^2(G,X)\) the Bochner space of measurable and square integrable functions from G into X (cf. [10]).
The Karhunen-Loève decomposition on Hilbert spaces
The Karhunen-Loève decomposition, also known as the Proper Orthogonal Decomposition (POD in the sequel), provides a technique to obtain low-dimensional approximations of parametric functions. To describe it, let us consider a Hilbert space H endowed with a scalar product \((\cdot ,\cdot )_H\), and a parameter measure space G. Let us consider a function \(f \in L^2(G, H)\), and introduce the POD operator
The POD operator A is linear and bounded. Moreover, it is self-adjoint and nonnegative. Indeed, it holds \(A=B^*B\), where \(B:H \mapsto L^2(G)\) and its adjoint operator \(B^*:L^2(G) \mapsto H\) are given by
Furthermore, the operator A is compact. This arises because the operator B is compact by the Kolmogorov compactness criterion in \(L^2(G)\) (cf. Muller [22], Chapter 2).
Consequently, there exists a complete orthonormal basis of H formed by eigenvectors \((v_m)_{m \ge 0}\) of A, associated to nonnegative eigenvalues \((\lambda _m)_{m \ge 0}\), that we assume to be ordered in decreasing value. Each nonzero eigenvalue has a finite multiplicity, and 0 is the only possible accumulation point of the spectrum. If H is infinite-dimensional, then \(\lim _{m \rightarrow \infty } \lambda _m =0\).
Moreover, consider the correlation operator \(C=BB^*:L^2(G) \mapsto L^2(G)\),
Then the sequence \((\varphi _m)_{m \ge 0}\), with
is an orthogonal basis of \(L^2(G)\). This yields the abstract Karhunen-Loève decomposition,
Corollary 0.1
It holds
where the series is convergent in \(L^2(G,H)\).
The main interest of the POD is the following best-approximation property (cf. [22], Chapter 2):
Lemma 0.2
Let \(V_l=Span(v_1,\ldots ,v_l)\subset H\). Let \(W_l\) be any subspace of H of dimension l. Then
where
denotes the distance from the element \(\varphi \in H\) to the subspace \(W_l\).
Let us next consider the case of bivariate functions. Assume that \(X \subset {\mathbb R}^d\) and \(Y \subset {\mathbb R}^s\) are two bounded domains, where d and s are integers \(\ge 1\). Let T be a given function in \( L^2(X\times Y)\), that we want to approximate by a low-dimensional variety. Let us consider the integral operator with kernel T expressed as
The operator B maps \(L^2(X)\) into \(L^2(Y)\), is bounded and has an adjoint operator \(B^*\) defined from \(L^2(Y)\) into \(L^2(X)\) as
We are thus in a particular case of the previous abstract setting, with \(G=Y\), \(H=L^2(X)\) and \(f(\gamma )(x)=T(x,\gamma )\).
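As a concrete illustration (not taken from the paper), the discrete counterpart of this bivariate setting can be sketched with a plain SVD of the sampled kernel; the grids, the test kernel and the uniform quadrature weights below are all illustrative assumptions.

```python
import numpy as np

# Sketch: discrete POD of a bivariate kernel T(x, y) via SVD.
# Sampling T on uniform grids and taking the SVD of the sample matrix,
# the singular triplets play the role of sigma_m, v_m(x), phi_m(y).
x = np.linspace(-1.0, 1.0, 200)
y = np.linspace(-1.0, 1.0, 150)
# An illustrative rank-2 kernel: its exact POD has only two nonzero modes.
T = np.outer(np.sin(np.pi * x), np.cos(np.pi * y)) \
  + 0.1 * np.outer(x**2, y)

U, s, Vt = np.linalg.svd(T, full_matrices=False)

# Rank-2 truncation reproduces T up to round-off.
T2 = (U[:, :2] * s[:2]) @ Vt[:2, :]
print(np.allclose(T, T2))     # rank-2 kernel -> exact recovery
print(s[2] / s[0] < 1e-12)    # remaining singular values vanish
```

With nonuniform quadrature weights one would rescale the rows and columns of the sample matrix by the square roots of the weights before the SVD, so that the discrete scalar products match the continuous ones.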
Recursive POD representation
The POD expansion may be recursively adapted to the representation of multiparametric functions. Let us consider the case of trivariate functions to avoid unnecessary complexities. Consider a bounded domain \(Z \subset {\mathbb R}^q\) for some integer number \(q \ge 1\), and a trivariate function \(T\in L^2(X \times Y \times Z)\). We identify T with a function of \(L^2(Y \times Z,L^2(X))\) as both spaces are isometric. From Corollary 0.1 we deduce that there exist two orthonormal sets \((v_m)_{m \ge 0}\) and \((\varphi _m)_{m \ge 0}\) which are respectively complete in \(L^2(X)\) and in \(L^2(Y \times Z)\) such that T admits the representation
where the sum is convergent in \(L^2(Y \times Z,L^2(X))\). Moreover the singular values \(\sigma _m \) (that we assume to be ordered in decreasing value) are nonnegative and converge to zero.
We next apply the POD expansion to each mode \(\varphi _m(y,z)\). There exist two orthonormal sets \((u_k^{(m)})_{k \ge 1}\) and \((w_k^{(m)})_{k \ge 1}\) which are respectively complete in \(L^2(Y)\) and \(L^2(Z)\), such that \(\varphi _m\) admits the representation
where the expansion is convergent in \(L^2(Z,L^2(Y))\), which is isometric to \(L^2(Y \times Z)\). Also, the singular values \( (\sigma _{k}^{(m)})_{k \ge 0}\) are nonnegative and decrease to zero. We then have
Lemma 0.3
The function \(T \in L^2(X \times Y \times Z)\) admits the expansions
where both sums are convergent in \(L^2(X \times Y \times Z)\).
Proof
It is enough to prove that either of the two series is absolutely convergent. Consider the partial absolute sum for the first one
As the eigenfunctions are orthonormal (in their corresponding spaces),
where the first inequality holds because \(\Vert \varphi _m\Vert _{L^2(Y \times Z)}=1\), and then \(\displaystyle \sum _{ k \ge 0}\left(\sigma _{k}^{(m)}\right)^2=1\). \(\square \)
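The two-level construction behind Lemma 0.3 can be sketched numerically as follows; this is an illustrative discrete analogue (uniform weights, SVDs standing in for the continuous POD operators), not the authors' implementation.

```python
import numpy as np

# Sketch: recursive POD of a 3-way array T[x, y, z].
# Primary step: SVD of the unfolding x vs (y, z); secondary step: SVD of
# each primary mode phi_m reshaped as a (y, z) matrix.
rng = np.random.default_rng(0)
nx, ny, nz = 30, 20, 25
T = rng.standard_normal((nx, ny, nz))

# Primary POD: unfold x versus (y, z).
U, s, Vt = np.linalg.svd(T.reshape(nx, ny * nz), full_matrices=False)

# Secondary POD of each mode phi_m(y, z), then reassemble the expansion.
R = np.zeros_like(T)
for m in range(len(s)):
    phi = Vt[m].reshape(ny, nz)
    Um, sm, Wt = np.linalg.svd(phi, full_matrices=False)
    for k in range(len(sm)):
        R += s[m] * sm[k] * np.einsum('i,j,k->ijk', U[:, m], Um[:, k], Wt[k])

print(np.allclose(T, R))  # the full (untruncated) RPOD reproduces T
```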
Feasible recursive POD representation
To build up a feasible recursive POD (RPOD) representation, consider a partial sum of the POD representation (7),
for some given integers \(K_1 \ge 1,\ldots , K_M \ge 1\). The notation \(P_M\) is shorthand for the multi-index \((M,K_1,\ldots ,K_M)\). We have
This estimate suggests a practical strategy for the PI method to construct the expansion (8) within a targeted error:
Algorithm FRPOD (Feasible recursive POD representation)
Assume that some estimates of the remainders are computable:
Set a tolerance \(\varepsilon > 0\). Let

Step 1: Compute the modes \(\varphi _m\) and \(v_m\) and singular values \(\sigma _m\) for \(m=1,\ldots ,M_\varepsilon \), until \(\alpha _{M_\varepsilon } \le A \, \varepsilon \).

Step 2: For each \(m=1,\ldots , M_\varepsilon \), compute the modes \(u_k^{(m)}\) and \(w_k^{(m)}\) and the singular values \(\sigma _k^{(m)}\) for \(k=1,\ldots ,K_m\), until \(\beta _{K_m}^{(m)} \le B \, \varepsilon \).
For smooth functions the singular values decrease very fast, so that good estimators of the remainders are \(\alpha _M = \sigma _{M+1}\), \(\beta _K^{(m)} = \sigma _{K+1}^{(m)}\). For less smooth functions, some more summands of the series defining the remainders could be needed. In the “Analysis of solutions of the reaction-diffusion” section we shall obtain estimators \(\alpha _M \) and \(\beta _K^{(m)} \) when T is the solution of the reaction-diffusion equation, considered as a function depending on three parameters: the diffusivity, the reaction rate and the space-time variable. These estimators decrease exponentially with M.
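A compact sketch of Algorithm FRPOD in this spirit (an illustration, not the authors' code: uniform quadrature weights, full SVDs in place of the power-iterate method, and the simplifying choice \(A=B=1\)):

```python
import numpy as np

# Sketch of Algorithm FRPOD with the simple estimators alpha_M = sigma_{M+1}
# and beta_K^{(m)} = sigma_{K+1}^{(m)}: keep a mode whenever its singular
# value exceeds the tolerance eps.
def frpod(T, eps):
    nx, ny, nz = T.shape
    U, s, Vt = np.linalg.svd(T.reshape(nx, ny * nz), full_matrices=False)
    M = np.searchsorted(-s, -eps)           # number of sigma_m > eps
    terms = []                              # (tau, v_m, u_k, w_k) quadruples
    for m in range(M):
        Um, sm, Wt = np.linalg.svd(Vt[m].reshape(ny, nz), full_matrices=False)
        K = np.searchsorted(-sm, -eps)      # number of sigma_k^{(m)} > eps
        for k in range(K):
            terms.append((s[m] * sm[k], U[:, m], Um[:, k], Wt[k]))
    return terms

def evaluate(terms, shape):
    R = np.zeros(shape)
    for tau, v, u, w in terms:
        R += tau * np.einsum('i,j,k->ijk', v, u, w)
    return R

# A smooth, non-separable trivariate test function (illustrative choice):
x = np.linspace(-1, 1, 40)[:, None, None]
y = np.linspace(-1, 1, 30)[None, :, None]
z = np.linspace(-1, 1, 35)[None, None, :]
T = 1.0 / (4.0 + x * y + y * z + z * x)

terms = frpod(T, eps=1e-8)
err = np.linalg.norm(evaluate(terms, T.shape) - T) / np.linalg.norm(T)
print(len(terms) < T.size)   # far fewer terms than grid points
print(err < 1e-6)            # error commensurate with the tolerance
```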
Lemma 0.4
Let \(T_{\varepsilon }\) be the representation of T provided by Algorithm FRPOD within an error level \(\varepsilon \). It holds
Proof
where we have used that \( \displaystyle \sum _{ 0 \le m \le M} \sigma _m^2 \le \Vert T\Vert _{L^2(X \times Y \times Z)}^2\). \(\square \)
In practice we recursively compute the expansion by the PI method [1]. This avoids computing the full singular value decomposition of the correlation matrix; we compute just the modes needed to reach a given error threshold.
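A minimal sketch of this mode-by-mode strategy for a discrete bivariate kernel (the test matrix, the fixed iteration count and the deflation step are illustrative assumptions; a practical code would add a convergence test inside the loop):

```python
import numpy as np

# Sketch of the power-iterate (PI) computation of the dominant POD triplet
# (sigma, v, phi) of a matrix, followed by deflation to get the next mode.
def dominant_triplet(T, iters=500):
    v = np.random.default_rng(1).standard_normal(T.shape[0])
    for _ in range(iters):
        phi = T.T @ v
        phi /= np.linalg.norm(phi)
        v = T @ phi
        sigma = np.linalg.norm(v)
        v /= sigma
    return sigma, v, phi

# Illustrative rank-2 test matrix with well-separated singular values.
T = np.outer([1., 2., 3.], [1., 0., 1., 0.]) \
  + 0.01 * np.outer([1., -1., 0.], [0., 1., 0., 1.])
s_ref = np.linalg.svd(T, compute_uv=False)

sigma1, v1, phi1 = dominant_triplet(T)
print(abs(sigma1 - s_ref[0]) < 1e-10)

# Deflation: subtract the converged rank-1 term and iterate again.
sigma2, _, _ = dominant_triplet(T - sigma1 * np.outer(v1, phi1))
print(abs(sigma2 - s_ref[1]) < 1e-10)
```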
Quasi-optimality of recursive POD representation
The POD representation in general provides the most accurate representation in \(L^2\) norm, for a given number of truncation modes. This is due to the best-approximation property stated in Lemma 0.2. Let us consider a trivariate approximation of T with M modes, of the form
Lemma 0.5
Let \(T \in L^2(X \times Y \times Z)\). It holds
where
and \(\hat{T}_M\) is any trivariate approximation of T with M modes, of the form (13).
Proof
Let \(V_M\) be the space spanned by \(\varphi _1,\ldots ,\varphi _M\) in \(L^2(Y \times Z)\). Observe that \(T_M\) is the orthogonal projection in \(L^2(Y \times Z)\) of T on \(V_M\). Let \(W_M\) be any subspace of dimension M of \(L^2(Y \times Z)\). Then, due to Lemma 0.2, it holds
for any \(S_M \in W_M\), where we denote \(T(x)(y,z)=T(x,y,z)\), and similarly \(T_M(x)\) and \(S_M(x)\). As the spaces \(L^2(X,L^2(Y \times Z))\) and \(L^2(X \times Y \times Z)\) are isometric, taking \(S_M= \hat{T}_M\), the inequality (14) follows. \(\square \)
Note that in particular this implies that the POD expansion (15) is more accurate than the three-variate PGD one.
The following result states the quasi-optimality of the feasible RPOD representation.
Lemma 0.6
It holds
for any trivariate approximation \(\hat{T}_M\) of T with M modes, of the form (13).
Proof
We have
where the second-to-last estimate is obtained similarly to the proof of Lemma 0.4, and the last one follows from Lemma 0.5. \(\square \)
Then, the feasible RPOD representation is more accurate than \(\hat{T}_M\), for \(\varepsilon \) small enough, if the inequality in (16) is strict. If (16) is an equality, this means that \(\hat{T}_M\) is optimal. In this case the accuracy of the feasible RPOD representation can be made arbitrarily close to the optimal one. It should be noted, however, that the RPOD contains more modes than \(\hat{T}_M\). Anyhow, we present some numerical experiments in the “Numerical tests” section that show that the RPOD representation is more accurate than the PGD one, for the same number of modes.
Analysis of solutions of the reaction-diffusion
Let us now consider the homogeneous Dirichlet boundary value problem of the linear reaction-diffusion equation,
where \(\gamma >0\) and \(\alpha \ge 0\) respectively denote the diffusivity and the reaction rate, and \(\mathcal{Q}=\Omega \times (0,b)\). This problem fits into the functional framework of constant-coefficient linear parabolic equations, and admits a unique solution \(T \in L^2((0,b),H^1(\Omega ))\) such that \(\partial _t T \in L^2(\mathcal{Q})\) if \(f \in L^2(\mathcal{Q})\) and \(T_0 \in L^2(\Omega )\). We shall assume that the pair \((\gamma ,\alpha )\) ranges in a set \(\mathcal{G}=[\gamma _m, \gamma _M]\times [\alpha _0,\alpha _M]\) with \(0<\gamma _m <\gamma _M\), \(0\le \alpha _0 \le \alpha _M\). Our purpose in this section is to analyze the rate of convergence of the approximation of T by a recursive POD expansion in separated tensor form:
where \(P=(M,I)\), the \(\tau _i^{(m)}\) are real numbers and \(u_i^{(m)} \in L^2(\gamma _m, \gamma _M)\), \( w_i^{(m)} \in L^2(0,\alpha _M)\) and \(v_m \in L^2(\mathcal{Q})\) are eigenmodes. To obtain this expression, let us start from the POD expansion of T where \(\mu =(\gamma ,\alpha )\in \mathcal{G}\) and \(z=(x,t) \in \mathcal{Q}\),
where the expansion converges in \(L^2(\mathcal{G}\times \mathcal{Q})\). As \(\varphi _m \in L^2(\mathcal{G})\), it also admits a POD expansion
which is convergent in \(L^2(\mathcal{G})\), where \(\{u_i^{(m)}\}_{i \ge 0}\) is an orthonormal basis of \(L^2(\gamma _m, \gamma _M)\) and \(\{w_i^{(m)}\}_{i \ge 0}\) is an orthonormal basis of \(L^2(0,\alpha _M)\). If we truncate the expansion (19) for T to \(M+1\) summands and that (20) for \(\varphi _m\) at \(I+1 \) summands, then we recover the expression for \(T_P\) in (18) where \(\tau _i^{(m)} = \sigma _m \, \sigma _i^{(m)}\).
To analyze the rate of convergence of \(T_P\) towards T, we need some technical tools. Let us consider the orthonormal Fourier basis \(\{e_k\} \) of \( L^2(\Omega )\) formed by the eigenfunctions of the Laplace operator. It holds
where \(\lambda _k >0\) is the eigenvalue associated to \(e_k\). The sequence \(\{\lambda _k\}_{k \ge 0}\) is ordered to be nondecreasing, with \(\displaystyle \lim _{k \rightarrow \infty }\lambda _k=+\infty \). We decompose \(T_0\) and f as
where the series are respectively convergent in \(L^2(\Omega )\) and \(L^2(\mathcal{Q})\), and
The solution of the reaction-diffusion equation is then expanded in terms of the eigenfunctions \(e_k\),
where the coefficients \(\theta _k\) are defined by
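The displayed formula defining \(\theta_k\) is not reproduced here. As a hedged reconstruction (a standard Duhamel-type formula consistent with the convolution structure used in the analyticity proof below, but not copied from the source; \(T_{0,k}\) and \(f_k\) denote the coefficients of \(T_0\) and f in the basis \(\{e_k\}\)), it would read:

```latex
\theta_k(t) \;=\; T_{0,k}\, e^{-(\gamma \lambda_k + \alpha)\, t}
\;+\; \int_0^t e^{-(\gamma \lambda_k + \alpha)\,(t-s)}\, f_k(s)\, ds .
```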
We shall consider T as a mapping from \(\mathcal G\) into \(L^2(\mathcal{Q})\) that maps a pair \((\gamma , \alpha ) \in \mathcal{G}\) to the function \(T((\cdot ,\cdot ), (\gamma ,\alpha )) \in L^2(\mathcal{Q})\), which we denote \(T_{(\gamma ,\alpha )}\).
Our main result is the following.
Theorem 0.7
The truncated POD series expansion \(T_P\) given by (18) satisfies the error estimate
for any \(1< \rho <\rho _*\), where \(C_{\rho }>0\) is a constant depending on \(\rho \), unbounded as \(\rho \rightarrow 1\), and \(\rho _*=(\sqrt{\gamma _M}+\sqrt{\gamma _m})/(\sqrt{\gamma _M}-\sqrt{\gamma _m})\).
Therefore, the recursive POD expansion converges with spectral accuracy in terms of the number of truncation modes in the main and secondary expansions.
The proof of this result is essentially based upon the analyticity of T with respect to diffusivity \(\gamma \) and reaction rate \(\alpha \). It is rather technical, and will come up after several lemmas, the first of which is
Lemma 0.8
The mapping \((\gamma , \alpha ) \in \mathcal{G}\mapsto T_{(\gamma ,\alpha )}\in L^2(\mathcal{Q})\) is analytic.
Proof
According to (23), T is the sum of two contributions, coming from the initial condition \(T_0\) and the source f. We prove the analyticity for each of them.
i.—Let us consider the part generated by the initial condition, corresponding to
Let us bound the residual
for any \(L>0\). Then the series converges uniformly on each set \([\epsilon ,+\infty [\times [0,+\infty [\), for all \(\epsilon >0\). As each term in the series (23) determines an analytic function from \(\mathcal G\) into \(L^2(\mathcal Q)\), the limit is analytic from \((0,+\infty ) \times (0,+\infty )\) into \(L^2(\mathcal Q)\).
ii.—Let us now investigate the part arising from the source f, corresponding to
This requires the following preliminary statement.
Let \(g \in L^2(0,b)\) and \(\lambda >0\), \(\alpha \ge 0\) be given. Then the function
mapping \(]0,+\infty [ \times ]0,+\infty [\) into \(L^2(0,b)\), is analytic.
To prove it, we show that \((\gamma ,\alpha ) \mapsto G(\gamma ,\alpha )\) is locally expressed as a convergent power series. Let \(\gamma _0>0\), \(\alpha _0 >0\) be fixed. On account of the analyticity of the exponential we derive that
This series is absolutely convergent in \(L^2(0,b)\). Indeed, since the integral term is a convolution, Young’s inequality implies that
The geometric series is convergent for \((\gamma ,\alpha )\) such that \(|(\lambda \gamma +\alpha )-(\lambda \gamma _0+\alpha _0)| < \eta \), provided that \(\eta <\lambda \gamma _0+\alpha _0\). Then, the function \( G: \,]0,+\infty [ \times ]0,+\infty [\mapsto L^2(0,b)\) is analytic.
To finish the proof, let us check that the series (23) with \(\theta _k\) given by (25) is uniformly convergent in \([\epsilon ,+\infty [\times [0,+\infty [\), for all \(\epsilon >0\). For a given L we have
Then the series (23) of analytic functions is uniformly convergent. As a result, the limit is also analytic. The proof is complete. \(\square \)
Another preliminary tool required in our study is related to the polynomial approximation of regular vector-valued functions. We shall adapt a result by S. Bernstein (from 1912), stated for complex-valued functions, and improved since then in many works (see for instance [19]). For some \(\rho >1\), let the set \({ E}_\rho \) in the complex plane be defined as
Consider a function \(F:{ E}_\rho \rightarrow H \) where H is a Hilbert space. For a given integer number \(M \ge 0\), let \(F_M\) be the truncated Chebyshev polynomial series expansion of F of degree M with coefficients in H. The shape of the polynomial \(F_M\) will be fixed later on (see Remark 0.2). Following the proof as presented in [19], we arrive at
Lemma 0.9
Assume that F is analytic and bounded in \(E_\rho \). There holds that
Remark 0.1
The constant in the lemma may be fixed to (see [25, Theorem 8.2])
that blows up as \(\rho \) goes to unity.
We now need to derive similar approximation estimates for analytic vector-valued functions defined from \(\mathcal G\) into \(L^2(\mathcal Q)\). The following result holds.
Lemma 0.10
For any \(\alpha \in [0,\alpha _M]\) there exists a polynomial \(S_M^{(\alpha )}\) ranging from \([\gamma _m,\gamma _M]\) into \(L^2(\mathcal Q)\), with degree \(\le M\), such that for all \(\rho \, (1<\rho < \rho _*)\),
where \(\hat{C}_\rho \) is a nonnegative constant, possibly unbounded as \(\rho \rightarrow 1\).
Proof
We only give a sketch of the proof. By Lemma 0.8, for any given \(\alpha \ge 0\), the vector-valued function \(\gamma \in {\mathbf {C}} \mapsto T(\gamma ,\alpha )\) is analytic in \(\mathrm{Re}\,\gamma >0\). This implies that, provided \(\rho <\rho _*\), the ellipse
is included in the analyticity set of T. Consider thus the coordinate transformation
It is affine and bijective from \( E_\rho \) into \( \mathcal E_\rho \) and transforms the reference interval \([-1,1]\) into \(G=[\gamma _m, \gamma _M]\). This transformation makes it possible to construct such a polynomial \(S_M^{(\alpha )}\). In fact, we start by constructing the truncated Chebyshev series expansion \(\hat{S}_M^{(\alpha )}(\hat{\zeta })\) of the (transformed) function \(\hat{T}^{(\alpha )} (\hat{\zeta }) = T (\zeta ,\alpha )\). Then, back on the interval \([\gamma _m,\gamma _M]\), we set \(S_M^{(\alpha )}(\zeta ) = \hat{S}_M^{(\alpha )}(\hat{\zeta })\).
To obtain the error estimate, we apply Lemma 0.9, which yields
where
which is finite and independent of \(\alpha \in [0,\alpha _M]\) due to the uniform boundedness of \(T(\gamma ,\alpha )\) in compact sets of \((0,+\infty ) \times (0,+\infty )\). The proof is complete. \(\square \)
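The exponential decay driven by the Bernstein ellipse of analyticity can be illustrated numerically; this is a sketch under illustrative assumptions (the interval, the scalar analytic test function \(1/\gamma\) with its pole at the origin, and a least-squares Chebyshev fit standing in for the truncated Chebyshev series).

```python
import numpy as np

# Illustration: truncated Chebyshev expansions of a function analytic near
# [gamma_m, gamma_M] converge exponentially; for f(g) = 1/g on [1, 4] the
# ellipse argument predicts an error of order 3^{-M}.
gamma_m, gamma_M = 1.0, 4.0

def f(g):                      # analytic on [1, 4]; pole at g = 0
    return 1.0 / g

gs = np.linspace(gamma_m, gamma_M, 400)
errors = []
for M in (4, 8, 12, 20):
    # Chebyshev fit on the interval via the affine pullback to [-1, 1].
    cheb = np.polynomial.chebyshev.Chebyshev.fit(
        gs, f(gs), deg=M, domain=[gamma_m, gamma_M])
    g = np.linspace(gamma_m, gamma_M, 1000)
    errors.append(np.max(np.abs(f(g) - cheb(g))))

print(all(errors[i + 1] < errors[i] for i in range(3)))  # exponential decay
print(errors[-1] < 1e-8)
```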
Remark 0.2
The polynomial \(S_M^{(\alpha )}\) may be put under the form
where \(U_m\) stands for the polynomial obtained by transporting the Chebyshev polynomial of degree m defined on \([-1,1]\) to the interval G, and the coefficients \((w_m^{(\alpha )})_{0\le m\le M}\) belong to \(L^2(\mathcal Q)\).
Proof of Theorem 0.7
Let us consider the truncated primary expansion
for some integer \(M\ge 0\). Let \(S_M\) be the vectorvalued polynomial (considered as a function of \((\gamma ,\alpha )\)) constructed in Lemma 0.10. In view of Lemma 0.2 and Remark 0.2, the following identity holds,
Applying the result stated in Lemma 0.10 it follows that
Next, observe that as the sequence \((v_m)_{m \ge 0}\) is orthonormal in \(L^2(\mathcal{Q})\), then
where
is the truncated POD expansion of \(\varphi _m\) to \(I+1\) terms. Also, by (2),
Then \(\varphi _m\) is an analytic function from \((0,+\infty )\times (0,+\infty )\) into \({\mathbb R}\). By an argument similar to that of Lemma 0.10, we prove that for any \(\alpha \in [0,\alpha _1]\) there exists a polynomial in \(\gamma \), \(r_{I,\alpha }^{(m)}(\gamma )\), of degree less than or equal to I such that
From (28), we deduce \( \sigma _m \,\varphi _m(\gamma ,\alpha ) \le \Vert T(\gamma ,\alpha )\Vert _{L^2(\mathcal{Q})} \) for all \((\gamma ,\alpha ) \in \mathcal{G}\), and then
for some constant \(K \ge 0\) independent of m and \(\alpha \). Consequently, in view of Lemma 0.2,
where \(r_I^{(m)}(\gamma ,\alpha )= r_{I,\alpha }^{(m)}(\gamma )\) and \(\displaystyle \overline{C}_\rho = |\mathcal{G}|^{1/2}\,\frac{K}{\rho -1}\). From (27) we deduce that
Combining this estimate with (26) completes the proof. \(\square \)
Remark 0.3

The constant \(C_\rho \) in estimate (24) also depends on the parameter domain \(\mathcal G\). We do not make this dependence explicit, to simplify the notation.

The limit value of the convergence rate \(\rho _*\) only depends on the ratio \(\gamma _M/\gamma _m\), as
$$\begin{aligned} \rho _*= \frac{2}{\sqrt{\frac{\gamma _M}{\gamma _m}}-1}+1. \end{aligned}$$
In view of estimate (24), in general a quasi-optimal choice for I is \(I=M+\displaystyle \frac{1}{2}\log M\) (actually, the closest integer to this number). In this case,
$$\begin{aligned} \Vert T-T_P\Vert _{L^2(\mathcal{G}\times \mathcal{Q})} \le C_\rho \, \rho ^{-M}. \end{aligned}$$We thus obtain the same asymptotic convergence order when \(M \rightarrow \infty \) as for \(\Vert T-T_M\Vert _{L^2(\mathcal{G}\times \mathcal{Q})} \).

For more general parameter-dependent parabolic equations, the above technique applies if the elliptic operator is symmetric. This makes it possible to diagonalize the problem and expand the solution as a series in terms of the eigenfunctions of the elliptic operator. The use of the Courant–Fischer–Weyl Theorem [19] allows one to reduce the estimate of the truncation error of the POD expansion to the estimate of the interpolation error of the solution with respect to one of the parameters, possibly by polynomial functions. Then the convergence rate of the POD expansion will depend on the smoothness of the solution with respect to the parameters of the problem.
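As a quick consistency check of the two expressions for \(\rho_*\) given in the remark above (the values of \(\gamma_m, \gamma_M\) below are illustrative, not from the paper):

```python
import math

# Check that the two expressions for the limit rate rho_* coincide, so that
# rho_* depends only on the ratio gamma_M / gamma_m:
#   rho_* = (sqrt(gM) + sqrt(gm)) / (sqrt(gM) - sqrt(gm))
#         = 2 / (sqrt(gM / gm) - 1) + 1
def rho_star(gm, gM):
    return (math.sqrt(gM) + math.sqrt(gm)) / (math.sqrt(gM) - math.sqrt(gm))

def rho_star_ratio(gm, gM):
    return 2.0 / (math.sqrt(gM / gm) - 1.0) + 1.0

pairs_ok = all(abs(rho_star(gm, gM) - rho_star_ratio(gm, gM)) < 1e-12
               for gm, gM in [(1.0, 4.0), (0.5, 50.0), (2.0, 3.0)])
print(pairs_ok)
```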
Reordering of recursive POD expansion
A practical way to reorder expansion (18) is in decreasing order of the values \( \tau _i^{(m)}=\sigma _m \,\sigma _i^{(m)} \). This leads to an expansion of the form
where the sets \(\{\tau _i^{(m)}\,,\, m=0,\ldots ,M,\, i=1,\ldots ,I\}\) and \(\{\tilde{\tau }_l\,,\, l=0,\ldots ,L\}\) coincide, and \(\tilde{\tau }_0 \ge \tilde{\tau }_1 \ge \cdots \ge \tilde{\tau }_L\).
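The reordering step can be sketched as follows (the singular values below are illustrative numbers, not computed from the paper's examples):

```python
import numpy as np

# Sketch of the reordering: flatten the double-indexed effective singular
# values tau_i^(m) = sigma_m * sigma_i^(m), sort them in decreasing order,
# and keep track of which (m, i) pair each one came from.
sigma = np.array([1.0, 0.3, 0.09])           # primary singular values
sigma_sec = np.array([[1.0, 0.25, 0.06],     # secondary values, one row per
                      [0.9, 0.30, 0.05],     # primary mode (illustrative
                      [0.8, 0.20, 0.04]])    # numbers only)

tau = sigma[:, None] * sigma_sec             # tau[m, i] = sigma_m * sigma_i^(m)
order = np.argsort(-tau, axis=None)          # flat indices, decreasing tau
pairs = [np.unravel_index(f, tau.shape) for f in order]

tau_sorted = tau.flat[order]
print(all(tau_sorted[l] >= tau_sorted[l + 1]
          for l in range(len(tau_sorted) - 1)))
print(tuple(pairs[0]) == (0, 0))             # the largest term is m = i = 0
```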
To analyze the rate of convergence of this rearrangement of the RPOD expansion, let us first remark that Theorem 0.7 allows, as a by-product, to estimate the singular values \(\sigma _m\) and \(\sigma _i^{(m)}\). Indeed, denoting by \({\mathcal L (E,F)}\) the set of bounded linear mappings from a Banach space E into a Banach space F, the following bound holds,
where
Consider the operator
Then by estimate (26)
Similarly,
where \(\displaystyle (E^{(m)} u) (\alpha ) = \int _G \varphi ^{(m)}(\gamma ,\alpha ) \, u(\gamma )\; d\gamma , \quad \forall \alpha \in [0,\alpha _1]. \) Let us assume that the \(\varphi ^{(m)}\) satisfy the additional (slightly) stronger boundedness property
Then, in view of estimate (29),
where \(\displaystyle (\tilde{E}_{I}^{(m)} u) (\alpha ) = \int _G r_{I}^{(m)}(\gamma ,\alpha ) \, u(\gamma )\; d\gamma , \quad \forall \alpha \in [0,\alpha _1]. \)
Then the error associated to this reordering, for large M and I, is estimated by
for any \(1< {\rho } < \rho _*\), where \(D_\rho \) is a constant, possibly unbounded as \({\rho } \rightarrow 1\). To justify it, let us write \(T_P\) as
where for simplicity we assume that M and I are such that \(L=(K+1)(K+2)/2 \) for some integer \(K\ge 0\). For other values there will appear a residual corresponding to higher-order modes, which will be asymptotically negligible, as it is of larger order with respect to \(\rho \). If estimates (34) and (36) are sharp, it holds
for some constant \(A_\rho \). Then, \(\tau _i^{(m)} < \tau _j^{(n)}\) if \(i+m > j+n\) and consequently the set \(\{\tilde{\tau }_l, \,k(k+1)/2\le l \le (k+1)(k+2)/2-1\,\} \) coincides with the set \(\{\tau _i^{(m)},\, m+i=k\,\}\). Then, due to estimate (38),
As \(\displaystyle \sum _{k \ge K+1} (k+1) \, \rho ^{-2k}\simeq (K+1) \, \rho ^{-2(K+1)}\), \(K \simeq \sqrt{2L}\), then (37) follows.
Practical implementation
Assume again that estimate (36) is sharp. Then \(\displaystyle \sum _{i \ge I+1} \left(\sigma _i^{(m)}\right)^2 \simeq C_m' \, \rho ^{-2I} \simeq \left(\sigma _{I+1}^{(m)}\right)^2\). Thus, we may set the estimator \(\beta _I^{(m)}=\sigma _{I+1}^{(m)}\), and similarly \(\alpha _M = \sigma _{M+1}\), in (10). This suggests considering a different number of summands in the secondary expansions of (18), which leads to an expansion as in (8),
where M and \(I_m\) are determined to fit the error tolerance tests \(\sigma _M \le A\, \varepsilon \) and \(\sigma _{I_m}^{(m)} \le B \, \varepsilon \) where A and B are given in (11). In practice for simplicity these may be replaced by \(\sigma _{M+1} \le \varepsilon \) and \(\sigma _{I_{m}+1}^{(m)} \le \varepsilon \).
Also, in view of (38) and (39), we deduce that a good estimator of the error \(\Vert T-T_P\Vert _{L^2(\mathcal{G}\times \mathcal{Q})}\) is \(\tau _I^{(M)}\), associated with the last computed mode, such that \(I+M=K\).
Numerical tests
This section is devoted to assessing the practical performance of the feasible RPOD expansion. In particular, we confirm the exponential rate of convergence of the truncated POD expansion for the reaction-diffusion equation proved in the “Analysis of solutions of the reaction-diffusion” section. We are also interested in comparing the rates of convergence of the RPOD and PGD expansions, as the latter is particularly well suited to approximate multivariate functions. We have considered functions with high and low smoothness, as the smoothness plays a crucial role in the decay of the size of the modes in both expansions. In addition, we have tested the ability of both representations to approximate functions that already have a separated tensor structure. For completeness we describe in the Appendix the application of the PGD expansion to approximate multivariate functions.
Multivariate functions
In this test we apply the RPOD and the PGD to approximate multivariate functions. We consider trivariate functions as a generic test to determine the relative performances of both expansions. We have considered the following tests:
Case 1: Function with tensor structure.
Case 2: Function with non tensor structure.
Case 3: Function with low regularity.
The space domain is fixed to \(\Omega =X \times Y \times Z\), with \(X=Y=Z=]-1,1[\), and Gauss–Lobatto–Legendre quadrature is used (see [4]) with the polynomial degree equal to \(N=64\). These formulas are used to evaluate the matrix representation of the operators B and A.
We set the error tolerance in \(L^2(X \times Y \times Z)\) on the residual of both the RPOD and PGD expansions to \(\mu = 10^{-7}\). This corresponds to \(\varepsilon =10^{-14}\) in Algorithm FRPOD. We have displayed in Figs. 1, 2, 3 the comparison of the convergence history of the feasible RPOD and PGD processes, for all the three-variate functions considered. The x-axis represents the number of eigenmodes while the y-axis represents the \(L^2(X \times Y \times Z)\) error, in logarithmic coordinates. We observe in Fig. 1 that the RPOD needs just 3 modes to fit a function that already has a separated tensor structure, while the PGD requires 17 modes to reach the error level. Further, for functions with low smoothness both expansions require approximately the same number of modes to reach a moderate accuracy; however, the RPOD is more efficient in reaching high accuracy in all cases. Finally, the error associated with the RPOD expansion is almost always below the one associated with the PGD expansion for the same number of modes.
Table 1 displays the number of modes required by each expansion to bring the error below the threshold \(\mu = 10^{-7}\). We observe that both expansions converge in all cases considered, although in general both require a larger number of modes to approximate functions with lower smoothness. Moreover, in all cases considered the RPOD requires fewer modes than the PGD.
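In the fully discrete setting, the RPOD of a sampled trivariate function reduces to nested SVDs: a primary SVD separating x from (y, z), followed by a secondary SVD of each retained bivariate mode. The following Python sketch (function names are ours, and the plain `numpy.linalg.svd` stands in for the feasible Power Iterate algorithm) illustrates the structure:

```python
import numpy as np

def rpod3(T, n_primary, n_secondary):
    """Recursive POD sketch for a trivariate function sampled on a grid.

    T has shape (nx, ny, nz). A primary SVD of the unfolding x vs (y, z)
    yields modes in x and bivariate modes in (y, z); each retained
    bivariate mode is then expanded by a secondary SVD.
    Returns rank-one terms (tau, X, Y, Z) with tau = sigma_m * sigma_i^(m).
    """
    nx, ny, nz = T.shape
    U, s, Vt = np.linalg.svd(T.reshape(nx, ny * nz), full_matrices=False)
    terms = []
    for m in range(min(n_primary, len(s))):
        W = Vt[m].reshape(ny, nz)                  # m-th bivariate mode in (y, z)
        U2, s2, V2t = np.linalg.svd(W, full_matrices=False)
        for i in range(min(n_secondary, len(s2))):
            terms.append((s[m] * s2[i], U[:, m], U2[:, i], V2t[i]))
    return terms

def reconstruct(terms):
    """Sum the separated rank-one terms back into a full tensor."""
    return sum(tau * np.einsum('i,j,k->ijk', X, Y, Z) for tau, X, Y, Z in terms)
```

For a function that already has a separated tensor structure, a single primary and secondary mode reproduce it to machine precision, consistent with the behavior observed in Fig. 1.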
Reaction-diffusion equation
This part is devoted to determining the effective convergence rate of the RPOD approximation of some solutions to the transient reaction-diffusion equation when parameterized by the diffusivity and reaction coefficients. We assess the exponential convergence rate and investigate the variation of this rate with respect to the set \(\mathcal{G}=[\gamma _m,\gamma _M]\times [\alpha _m,\alpha _M]\).
Test 1: Exponential convergence rate.
We consider the time-dependent reaction-diffusion equation in the domain \(\mathcal Q=(0,1)\times (0,1)\) and select three possible pairs of source terms and initial conditions, given by
These data have mild singularities, so the temperature solutions of (17) have reduced regularity with respect to x and t, in particular at \(t=0\) for the last two data sets. The heat problem is discretized by an Euler scheme in time and a Gauss–Lobatto–Legendre spectral method in space (see [4]); the time step is \(\delta t = 10^{-2}\) and the polynomial degree is \(N=64\).
The matrix representations of the operators B and A are computed by means of accurate quadrature formulas: the various integrals (with respect to either \(\gamma \), \(\alpha \) or (t, x)) are evaluated using Gauss–Lobatto quadrature formulas with high resolution in the corresponding intervals.
Figure 4 shows the convergence history of the RPOD expansion for the reaction-diffusion equation (40) in terms of the total number of modes in the expansion. We have considered the sets of diffusivities \(\gamma \in [1,51]\) and reaction rates \(\alpha \in [0,100]\). The error is measured in the \(L^2(\mathcal{Q})\) norm. The number of secondary modes \(I_m\) has been determined to satisfy the test \(\sigma _m^{(I_m+1)} \le \varepsilon =10^{-10}\). In practice a small number of secondary modes (actually, \(I_m \simeq 4\)) is needed to satisfy this test. The modes have been rearranged in decreasing order of the effective singular values \(\tau _i^{(m)}=\sigma _m \, \sigma _i^{(m)}\) (denoted by a \(\CIRCLE \) symbol). We observe that the \(\tau _i^{(m)}\) are indeed good error estimators for this rearranged expansion, as was argued in the “Practical implementation” section.
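The rearrangement by effective singular values amounts to sorting the index pairs (m, i) by \(\tau_i^{(m)}\); a minimal sketch (the function name is ours):

```python
import numpy as np

def rearrange_modes(sigma, sigma_sec):
    """Sort RPOD modes by effective singular values tau_i^(m) = sigma_m * sigma_i^(m).

    sigma: primary singular values (length M).
    sigma_sec: list of sequences; sigma_sec[m] holds the secondary singular
    values retained for the m-th primary mode.
    Returns (m, i, tau) triples sorted with the largest tau first.
    """
    triples = [(m, i, sigma[m] * s)
               for m in range(len(sigma))
               for i, s in enumerate(sigma_sec[m])]
    return sorted(triples, key=lambda t: -t[2])
```

Truncating the sorted list and summing the corresponding rank-one terms yields the rearranged expansion whose error the \(\tau_i^{(m)}\) estimate.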
To assess the regularity of the eigenmodes associated to the conductivity parameter \(\gamma \), we plot the three modes corresponding to the largest singular values. The computation is made for the case of Data 3. Figure 5 clearly shows that these functions are regular. The same observation holds for the reaction parameter \(\alpha \).
Test 2: Dependence of the convergence rate with respect to the parameters range.
The dependence of the exponential convergence rate, stated by Theorem 0.7, with respect to the ratio of diffusivities \(R=\gamma _M/\gamma _m\) is illustrated in Fig. 6. We depict the convergence history for Data 3, computed for \(R=25, 64\) and 400, in all cases with a fixed interval of reaction rates \([\alpha _m,\alpha _M]=[0,100]\), with respect to the square root of the number of modes, \(M=\sqrt{L}\). We point out that the convergence rate degrades as R increases, in accordance with the theoretical dependence of the rate on R.
We observe some gap between the purely exponential decay of the error and the computed one, as the error curve in logarithmic coordinates appears to be a slightly concave curve instead of a straight line. This is consistent with the presence of the factor \(L^{1/4}\) in estimate (37).
In Table 2, we present the computed exponential convergence rate \(\alpha _c=2 \,\log \rho _c\), so that the \(L^2(\mathcal{G}\times \mathcal{Q})\) error, in terms of the number of modes after rearranging the RPOD series, is assumed to satisfy
and the theoretical one, given by \(\alpha ^*=2\,\log \rho _*\). The value \(\alpha _c\) is computed by exponential regression. We indeed recover an exponential rate of convergence with respect to the square root of the number of modes, with an effective convergence rate larger than the theoretical one: the computed rate is in all cases larger than one (see Table 2). We thus observe a kind of superconvergence effect.
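The exponential regression used to compute \(\alpha_c\) can be sketched as an ordinary least-squares fit of \(\log e_L\) against \(\sqrt{L}\), assuming the model \(e_L \simeq C\,e^{-\alpha_c \sqrt{L}}\) (a hypothetical minimal implementation; the function name is ours):

```python
import numpy as np

def fitted_rate(L, err):
    """Least-squares fit of log(err) = log(C) - alpha_c * sqrt(L).

    L: array of numbers of modes; err: corresponding L^2 errors.
    Returns the computed exponential rate alpha_c.
    """
    # design matrix for the linear model in (log C, alpha_c)
    A = np.vstack([np.ones_like(L, dtype=float), -np.sqrt(L)]).T
    coef, *_ = np.linalg.lstsq(A, np.log(err), rcond=None)
    return coef[1]
```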
We next test the dependence of the convergence rate with respect to the interval of reaction rates \([\alpha _m,\alpha _M]\). We show in Fig. 7 the convergence histories corresponding to \(\alpha _m=0\), \(\alpha _M= 10,100,500,1000\) for fixed \(\gamma _m=1\), \(\gamma _M=51\). We observe a decrease of the rate as \(\alpha _M\) increases, which however appears to be uniformly bounded, in agreement with estimate (37), where the dependence of the error bound on \([\alpha _m,\alpha _M]\) only appears through the coefficient \(D_\rho \).
The last numerical experiment studies whether the dependence of the exponential convergence on the diffusivity range \([\gamma _m,\gamma _M]\) indeed takes place through the ratio \(R=\gamma _M/\gamma _m\). This is confirmed by the results plotted in Fig. 8, where we consider the pairs \((\gamma _m,\gamma _M)=(1,2)\) and (4, 8), corresponding to \(R=2\), and \((\gamma _m,\gamma _M)=(1,25)\) and (4, 100), corresponding to \(R=25\), with fixed \(\alpha _m=0\), \(\alpha _M=100\).
Conclusion
We have introduced in this paper a recursive POD (RPOD) expansion to approximate multivariate functions. The approach consists in building truncated recursive POD expansions, to a given error tolerance, of the modes appearing in the expansion at the previous level. We have constructed a practical truncation error estimator by means of bounds for the singular values, which is used to recursively compute the expansion by the Power Iterate (PI) method. This allows one to compute just the modes needed to attain a given error threshold. We have proved the quasi-optimality of this RPOD expansion in \(L^2\), similar to that of the POD expansion.
We have proved the exponential rate of convergence of the RPOD expansion for the solution of the reaction-diffusion equation, based upon the analyticity of the solution with respect to the diffusivity and reaction parameters.
We have finally performed some relevant numerical tests that on one hand show that the RPOD is more accurate than the PGD expansion for trivariate functions, and on the other hand confirm the exponential rate of convergence for the solution of the reaction-diffusion equation, in good agreement with the qualitative and quantitative theoretical expectations.
Further extensive tests for more complex multivariate functions, in particular of practical interest for engineering applications, are in progress and will appear in a forthcoming paper.
References
 1.
Azaïez M, Ben Belgacem F. Karhunen–Loève's truncation error for bivariate functions. Comput Methods Appl Mech Eng. 2015;290:57–72.
 2.
Azaïez M, Ben Belgacem F, Chacón Rebollo T. Error bounds for POD expansions of parameterized transient temperatures. Comput Methods Appl Mech Eng (submitted).
 3.
Berkooz G, Holmes P, Lumley JL. The proper orthogonal decomposition in the analysis of turbulent flows. Annu Rev Fluid Mech. 1993;25:539–75.
 4.
Bernardi C, Maday Y. Approximations spectrales de problèmes aux limites elliptiques, Mathématiques et applications. Berlin: Springer; 1992.
 5.
Chinesta F, Keunings R, Leygue A. The Proper Generalized Decomposition for Advanced Numerical Simulations: A Primer. New York: Springer Publishing Company, Incorporated; 2013.
 6.
Chinesta F, Ladevèze P, Cueto E. A Short Review on Model Order Reduction Based on Proper Generalized Decomposition. Arch Comput Methods Eng. 2011;18:395–404.
 7.
De Lathauwer L, De Moor B, Vandewalle J. A multilinear singular value decomposition. SIAM J Matrix Anal Appl. 2000;21(4):1253–78.
 8.
De Lathauwer L, De Moor B, Vandewalle J. On the Best Rank-1 and Rank-(R1, R2,…, RN) Approximation of Higher-Order Tensors. SIAM J Matrix Anal Appl. 2000;21(4):1324–42.
 9.
De Silva V, Lim LH. Tensor rank and the ill-posedness of the best low-rank approximation problem. SIAM J Matrix Anal Appl. 2008;30(3):1084–127.
 10.
Diestel J, Uhl JJ. Vector measures. Providence: American Mathematical Society; 1977.
 11.
Epureanu BI, Tang LS, Paidoussis MP. Coherent structures and their influence on the dynamics of aeroelastic panels. Int J Non-Linear Mech. 2004;39:977–91.
 12.
Ghanem R, Spanos P. Stochastic finite elements: a spectral approach. New York: Springer-Verlag; 1991.
 13.
Golub GH, Van Loan CF. Matrix Computations. 3rd ed. Baltimore: The Johns Hopkins University Press; 1996.
 14.
Heyberger C, Boucard PA, Néron D. A rational strategy for the resolution of parametrized problems in the PGD framework. Comp Meth Appl Mech Eng. 2013;259:40–9.
 15.
Heyberger C, Boucard PA, Néron D. Multiparametric Analysis within the Proper Generalized Decomposition Framework. Comput Mech. 2012;49(3):277–89.
 16.
Holmes P, Lumley JL, Berkooz G. Coherent Structures, Dynamical Systems and Symmetry, Cambridge Monographs on Mechanics. Cambridge: Cambridge University Press; 1996.
 17.
Hotelling H. Analysis of a complex of statistical variables into principal components. J Educ Psychol. 1933;24:417–41.
 18.
Jolliffe IT. Principal Component Analysis. Springer; 1986.
 19.
Little G, Reade JB. Eigenvalues of analytic kernels. SIAM J Math Anal. 1984;15:133–6.
 20.
Loève MM. Probability Theory. Princeton: Van Nostrand; 1988.
 21.
Lorente LS, Vega JM, Velazquez A. Generation of Aerodynamic Databases Using High-Order Singular Value Decomposition. J Aircraft. 2008;45(5):1779–88.
 22.
Muller M. On the POD Method: An Abstract Investigation with Applications to Reduced-Order Modeling and Suboptimal Control. PhD thesis, Georg-August-Universität Göttingen; 2008.
 23.
Nouy A. A generalized spectral decomposition technique to solve a class of linear stochastic partial differential equations. Comput Meth Appl Mech Eng. 2007;196:4521–37.
 24.
Pearson K. On lines and planes of closest fit to systems of points in space. Philos Mag J Sci. 1901;2:559–72.
 25.
Trefethen LN. Approximation theory and approximation practice. Software, Environments, and Tools. Philadelphia: Society for Industrial and Applied Mathematics (SIAM); 2013.
 26.
Willcox K, Peraire J. Balanced Model Reduction via the Proper Orthogonal Decomposition. AIAA. 2002;40:2323–30.
 27.
Yano M. A Space-Time Petrov–Galerkin Certified Reduced Basis Method: Application to the Boussinesq Equations. SIAM J Sci Comput. 2013;36:232–66.
Authors' contributions
MA, FBB and TCR participated in the development of the mathematical proofs and the numerical investigations. They checked the results and wrote the manuscript. All authors read and approved the final manuscript.
Acknowledgements
None.
Competing interests
The authors declare that they have no competing interests.
Appendix: The PGD representation of multivariate functions
We describe in this section the procedure to calculate the PGD representation of a multivariate function. For the sake of clarity we focus on trivariate functions; the extension to general multivariate functions is straightforward.
The PGD approximation of a trivariate function T searches for an expansion of the form
The leading term \(X_0\otimes Y_0\otimes Z_0\) is initially computed by means of an adaptation of the Power Iteration algorithm: assume an approximation \(X_0^{(n-1)}\otimes Y_0^{(n-1)}\otimes \, Z_0^{(n-1)}\) is known.
Step 1. Find \(Z_0^{(n)} \in L^2(Z)\) such that for all \(Z^* \in L^2(Z)\),
Step 2. Find \(\tilde{X}_0^{(n)} \in L^2(X)\) such that for all \(X^* \in L^2(X)\),
Set
Step 3. Find \(\tilde{Y}_0^{(n)} \in L^2(Y)\) such that for all \(Y^* \in L^2(Y)\),
Set
The procedure is iterated until the error falls below a given tolerance.
The Mth mode \(X_M\otimes Y_M \otimes Z_M\) is computed in the same way, by replacing the function T by the residual \(T-\hat{T}_{M-1}\), where now
In this way, the residual \(T-\hat{T}_M\) is orthogonal to \(Span(X_M\otimes Y_M \otimes Z_M)\).
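In the fully discrete setting (with unit quadrature weights), the alternating fixed-point sweeps of Steps 1–3 and the greedy deflation by the residual can be sketched as follows (a minimal illustration; function names are ours, and convergence of the alternating iteration is not guaranteed in general):

```python
import numpy as np

def pgd_mode(T, n_iter=50):
    """One rank-one PGD mode of a trivariate array, computed by the
    alternating fixed-point sweeps of Steps 1-3 (unit quadrature weights).

    X and Y are normalized in the discrete norm; Z carries the weight.
    """
    nx, ny, nz = T.shape
    X = np.ones(nx) / np.sqrt(nx)
    Y = np.ones(ny) / np.sqrt(ny)
    for _ in range(n_iter):
        Z = np.einsum('ijk,i,j->k', T, X, Y)      # Step 1: update Z
        X = np.einsum('ijk,j,k->i', T, Y, Z)      # Step 2: update X ...
        X /= np.linalg.norm(X)                    # ... and normalize
        Y = np.einsum('ijk,i,k->j', T, X, Z)      # Step 3: update Y ...
        Y /= np.linalg.norm(Y)                    # ... and normalize
    Z = np.einsum('ijk,i,j->k', T, X, Y)          # final weighted Z factor
    return X, Y, Z

def pgd(T, n_modes):
    """Greedy PGD expansion: each new mode is computed on the residual."""
    R = T.copy()
    terms = []
    for _ in range(n_modes):
        X, Y, Z = pgd_mode(R)
        terms.append((X, Y, Z))
        R -= np.einsum('i,j,k->ijk', X, Y, Z)     # deflate the residual
    return terms, R
```

For a function that is already a single tensor product, one mode removes the residual to machine precision; for general functions the loop is repeated until the residual norm falls below the prescribed tolerance.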
To the best of the authors' knowledge, there is no proof that the PGD expansion (44) exists for functions \(T \in L^2(X\times Y \times Z)\), or even with additional regularity, nor that the alternate Power Iteration process (45)–(47) converges. There is a proof, however, that for general functions depending on three or more parameters there do not exist optimal subspaces of finite dimension 3 or larger satisfying the optimal approximation property set by Theorem 0.2 (see [9]).
Rights and permissions
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
About this article
Cite this article
Azaïez, M., Belgacem, F.B. & Rebollo, T.C. Recursive POD expansion for reaction-diffusion equation. Adv. Model. and Simul. in Eng. Sci. 3, 3 (2016). https://doi.org/10.1186/s40323-016-0060-1
Keywords
 Recursive POD
 High Order SVD
 Model Reduction
 Multivariate functions
 PGD