# Error estimation and adaptivity for PGD based on complementary solutions applied to a simple 1D problem

## Abstract

Reduced order methods are powerful tools for the design and analysis of sophisticated systems, reducing computational costs and speeding up the development process. Among these reduced order methods, the Proper Generalized Decomposition is a well-established one, commonly used to deal with multi-dimensional problems that often suffer from the curse of dimensionality. Although the PGD method has been around for some time now, it still lacks mechanisms to assess the quality of the solutions obtained. This paper explores the dual error analysis in the scope of the PGD, using complementary solutions to compute error bounds and drive an adaptivity process, applied to a simple 1D problem. The energy of the error obtained from the dual analysis is used to determine the quality of the PGD approximations. We define a new adaptivity indicator based on the energy of the error and use it to drive parametric h- and p-adaptivity processes. The results are positive, with the indicator accurately capturing the parameter that will lead to the lowest errors.

## Introduction

Computational methods have become an essential part of most high-end engineering projects. They greatly simplify the design and analysis of highly complex systems and are a must for companies that aim to become and stay competitive. Modeling the most demanding systems requires a high computational effort, leading to a long response time between the identification of the details of the model to be considered and the availability of the response. Furthermore, the modification of details in the model leads to re-analyses of the process, further delaying the determination of the desired response. This is a major issue in our current fast-paced reality, which relies on the premise that the faster you can come up with a reliable solution, the better. Reduced order methods appear as an answer to this demand.

The main idea behind reduced order methods is to formulate a model that retains only the essential parts of a simulation, reducing the computational time needed to perform a complex analysis while aiming to maintain the accuracy of the results. The Proper Generalized Decomposition (PGD), which we use in this work, is one of the several reduced order methods that exist. The main characteristic of this method is its a priori nature, which avoids the need to perform the full simulation beforehand in arbitrarily selected instances.

One of the challenges that the application of PGD faces is that the method lacks a posteriori error estimation tools and adaptivity strategies [2,3,4,5]. Another issue with PGD is its complexity when dealing with geometry as one of the parameters [6, 7].

This paper has the objective of showing the application of a PGD driven adaptivity process to a simple 1D problem. The particular aspect of this implementation is that we simultaneously seek two complementary PGD approximations, one compatible and one equilibrated, which we will use to bound their error (as in ) and also to drive the adaptivity process (in the physical and in the parameter space).

The governing equations that describe the problem being considered are presented for the case of a bar divided into two sections, followed by a discussion on error assessment strategies to evaluate the solutions obtained. We proceed to explain the steps used to obtain approximate solutions, first for the finite element method and later using the proper generalized decomposition. We present the parametric form of the approximate solutions and a way of assessing the error of the PGD solutions, this time specific to each parameter. We apply this error measurement strategy to obtain a novel error indicator which can drive both h- and p-adaptivity processes.

## Governing differential equations

We consider a simple problem for which an analytic solution can be obtained: a linear elastic straight bar divided into $$n_b$$ sections, each section with a length ($$\gamma$$), a uniform axial stiffness (EA) and a uniform elastic support (k), connected in sequence to form a one-dimensional lattice structure. Note that when the stiffness of the support is zero, the solution of this problem is trivial.

Although arbitrary imposed displacements or applied forces may be considered at the nodes that connect the different sections, we restrict our study to the problem where the displacement ($$\Delta$$) is fixed at the start of the first bar and a non-zero force (P) is considered at the free end of the bar, as presented in Fig. 1.

We want to obtain a solution for this problem, expressed in terms of either the displacements (u) or of the axial forces (N), that is characterized by the values of $$\gamma$$, EA and k for each section, plus the value of the imposed displacement and of the applied force. This results in, at most, $$(3\times n_b) + 2$$ parameters.

We know that the strain in an arbitrary point at the bar is

\begin{aligned} \varepsilon =\frac{\mathrm {d} u}{\mathrm {d} x}. \end{aligned}
(1)

The corresponding axial force is

\begin{aligned} N = E A \, \varepsilon , \end{aligned}
(2)

where E is Young's modulus at the specified point and A its cross-section area. Assuming that the force transmitted by the support is $$F = ku$$, with k being the elastic stiffness of the support, and knowing that this force has to be balanced by the axial force, we have:

\begin{aligned} \frac{\mathrm {d} N}{\mathrm {d} x} - F = 0. \end{aligned}
(3)

The exact solution of the problem has to satisfy:

\begin{aligned} EA\frac{\mathrm {d} ^2u}{\mathrm {d} x^2}- F =0 , \end{aligned}
(4)

subjected to

\begin{aligned} \left\{ \begin{array}{lll} u |_{x=0} &{}=&{} \Delta ;\\ EA \frac{\mathrm {d} u}{\mathrm {d} x} \big |_{x=L} &{}=&{} P. \end{array}\right. \end{aligned}
(5)

Notice that for a bar divided into multiple sections we will need to impose the continuity of the displacements and of the corresponding axial force at the $$n_b-1$$ connection nodes of the sections.
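For a single section with constant properties, (4)–(5) reduce to $$EA\,u'' - ku = 0$$, whose solution is a combination of hyperbolic functions. A minimal sketch of this closed form (our own helper, not from the paper; it assumes $$k > 0$$):

```python
import numpy as np

def bar_solution(EA, k, L, Delta, P):
    """Closed-form solution of EA*u'' - k*u = 0 on [0, L] with
    u(0) = Delta and EA*u'(L) = P (single section, constant EA, k > 0)."""
    lam = np.sqrt(k / EA)
    # General solution: u(x) = C1*cosh(lam*x) + C2*sinh(lam*x).
    C1 = Delta                                     # from u(0) = Delta
    # From EA*u'(L) = EA*lam*(C1*sinh(lam*L) + C2*cosh(lam*L)) = P:
    C2 = (P / (EA * lam) - C1 * np.sinh(lam * L)) / np.cosh(lam * L)
    u = lambda x: C1 * np.cosh(lam * x) + C2 * np.sinh(lam * x)
    N = lambda x: EA * lam * (C1 * np.sinh(lam * x) + C2 * np.cosh(lam * x))
    return u, N
```

Both boundary conditions in (5) are satisfied by construction, and the axial force follows from (1)–(2).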

We can also write the solution for this problem in terms of the axial force, considering first equilibrium and then compatibility. We know that for a given axial force distribution, equilibrium will be satisfied if relation (3) is true, implying that:

\begin{aligned} u = \frac{1}{k}\frac{\mathrm {d}N}{\mathrm {d}x}. \end{aligned}
(6)

Compatibility and the constitutive relations require that the strain corresponding to this displacement has to be equal to the strain associated with the axial force, such that:

\begin{aligned} \frac{\mathrm {d}}{\mathrm {d}x}\left( \frac{1}{k}\frac{\mathrm {d}N}{\mathrm {d}x}\right) = \frac{N}{EA}. \end{aligned}
(7)

If we write the problem in terms of the axial force, which has to be continuous, it becomes

\begin{aligned} \frac{1}{k}\frac{\mathrm {d} ^2N}{\mathrm {d} x^2}-\frac{N}{EA} =0, \end{aligned}
(8)

subjected to

\begin{aligned} \left\{ \begin{array}{l} \frac{1}{k}\frac{{\mathrm {d}} N}{{\mathrm {d}} x}\big |_{x=0} \,= \, \Delta ; \\ N \big |_{x=L} \,=\, P. \end{array}\right. \end{aligned}
(9)

Again, we need to impose the continuity of the axial forces and of the corresponding displacements at the connection nodes of a bar divided in multiple sections.

## Error assessment measurements

In this section we discuss two possible ways to compute the error of approximate solutions. This is done for an arbitrary problem with domain $$\Omega$$ and boundary $$\Gamma$$, which for the bar that we are considering are its length and its endpoints, respectively. We will first discuss the error in energy and then the energy of the error. After a brief explanation, these error indicators will be applied to our problem.

Both the error in energy and the energy of the error can work as global estimators of the solution’s behavior, providing error bounds.

### Error in Energy

Following , the potential energy $$\Pi$$ of a mechanical system can be defined as a function of a kinematically admissible displacement u, such that:

\begin{aligned} \Pi (u) = {\mathcal {U}}(u) - {\mathcal {V}}(u) = \int _{\Omega }{\mathcal {W}}(u) \mathrm {d} \Omega - {\mathcal {V}}(u), \end{aligned}
(10)

with $${\mathcal {U}}$$ describing the total strain energy, $${\mathcal {W}}$$ the strain energy density, and $${\mathcal {V}}$$ is the work done by the applied forces, defined as:

\begin{aligned} {\mathcal {V}}(u) = \int _{\Omega }\bar{{b}}^T u \, \mathrm {d} \Omega + \int _{\Gamma _t}\bar{{t}}^T u \, \mathrm {d} \Gamma , \end{aligned}
(11)

where $$\bar{{b}}$$ are the body forces, $$\bar{{t}}$$ the boundary traction and $${\Gamma _t}$$ the static boundary. The complementary potential energy $$\Pi ^{c}$$ for a statically admissible stress field N can be written in a similar way as

\begin{aligned} \Pi ^{c}(N) = {\mathcal {U}}^{c}(N) - {\mathcal {V}}^{c}(N) = \int _{\Omega }{\mathcal {W}}^{c}(N) \mathrm {d} \Omega - {\mathcal {V}}^{c}(N), \end{aligned}
(12)

where $${\mathcal {U}}^{c}$$ is the complementary strain energy, $${\mathcal {W}}^{c}$$ the complementary strain energy density, and $${\mathcal {V}}^{c}$$ is the work done by the imposed displacements. We can define $${\mathcal {V}}^{c}$$ as

\begin{aligned} {\mathcal {V}}^{c}(N) = \int _{\Gamma _u} ({\mathcal {N}}^T N)^T {\tilde{u}} \, \mathrm {d} \Gamma , \end{aligned}
(13)

with $${\Gamma _u}$$ representing the kinematic boundary, $${\mathcal {N}}$$ the boundary operator, which for the 1D problem is $$\pm 1$$, and $${\tilde{u}}$$ the displacement at the boundary. Considering linear elastic constitutive relations and that the possible influence of initial strains is excluded, we can write the strain energy density as:

\begin{aligned} {\mathcal {W}}(u) = \frac{1}{2} \left( u'^T {EA} \ u' + u^T k \ u \right) , \end{aligned}
(14)

and the complementary strain energy density:

\begin{aligned} {\mathcal {W}}^{c}(N) = \frac{1}{2} \left( N^T {\frac{1}{EA}} \, N + N'^T \frac{1}{k} N' \right) . \end{aligned}
(15)

We can use the dual solutions to write the average values for the total potential energies and the strain energies, such that:

\begin{aligned} \Pi _a = \frac{1}{2} \bigg ( \Pi (u) - \Pi ^c(N) \bigg );\qquad {\mathcal {U}}_a = \frac{1}{2} \bigg ( {\mathcal {U}}(u) - {\mathcal {U}}^c(N) \bigg ). \end{aligned}
(16)

The sum of the potential energy and the complementary potential energy is zero for the exact solution. For force-driven or displacement-driven problems, such as the one that we will consider, the error of each approximate solution is bounded by the sum of their energies. Compatible solutions of force-driven problems have a total energy equal to minus the strain energy, which is a negative value. We choose to work with the negative of the total energy, which is positive and converges from below, and can be used as an error measure. On the other hand, compatible solutions of displacement-driven problems have a total energy which is always equal to the strain energy, therefore converging from above. Equilibrated solutions have the opposite behavior: they converge from above for force-driven problems and from below for displacement-driven ones (More about this in ).
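For completeness, the bracketing stated above can be written explicitly. Using standard results for linear elastic problems (recalled here in the notation of the error energies defined in the next subsection, Eqs. (17)–(18)), the energies of the approximate solutions differ from the exact ones by half the corresponding energies of the error:

\begin{aligned} \Pi (u_k) = \Pi (u) + \tfrac{1}{2}\,\epsilon _k^2 \quad \text {and} \quad \Pi ^{c}(N_s) = -\Pi (u) + \tfrac{1}{2}\,\epsilon _s^2, \end{aligned}

so that the exact potential energy is bracketed as $$-\Pi ^{c}(N_s) \le \Pi (u) \le \Pi (u_k)$$.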

### Energy of the error

We can compute the exact energy of the error of a compatible or of an equilibrium solution, respectively as:

\begin{aligned} \epsilon _k^2= & {} \int \left( (N_k-N)(\varepsilon _k-\varepsilon )+(F_k-F)(u_k-u) \right) \, \mathrm {d} x; \end{aligned}
(17)
\begin{aligned} \epsilon _s^2= & {} \int \left( (N_s-N)(\varepsilon _s-\varepsilon )+(F_s-F)(u_s-u) \right) \, \mathrm {d} x, \end{aligned}
(18)

where the subscript k indicates solutions that come from kinematically admissible displacements, the subscript s solutions that come from statically admissible axial forces, and the absence of a subscript indicates the exact solution. An upper bound of both values is:

\begin{aligned} \epsilon ^2 = \int \left( (N_k-N_s)(\varepsilon _k-\varepsilon _s)+(F_k-F_s)(u_k-u_s) \right) \, \mathrm {d} x. \end{aligned}
(19)

Unlike the energies of the solutions, this expression is applicable to any type of problem (force driven, displacement driven or mixed). Therefore, from now on we always use the energy of the error instead of the error in energy.

We call the integrand in (19) an error density, defined as:

\begin{aligned} \rho = (N_k-N_s)(\varepsilon _k-\varepsilon _s)+(F_k-F_s)(u_k-u_s). \end{aligned}
(20)

Therefore, we can rewrite Eq. (19) as:

\begin{aligned} \epsilon ^2 = \int \rho \; \mathrm {d}x. \end{aligned}
(21)

It can be shown that:

\begin{aligned} \epsilon ^2 \ge \epsilon ^2_k; \quad \text {and} \quad \epsilon ^2 \ge \epsilon ^2_s. \end{aligned}
(22)
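The bound (19)/(21) can be evaluated numerically for any pair of dual fields. A sketch (our own helper, not from the paper; it assumes callables for the compatible displacement and the equilibrated axial force, and approximates derivatives by finite differences):

```python
import numpy as np

def error_energy_bound(u_k, N_s, EA, k, L, n=4001):
    """Trapezoidal evaluation of the upper bound (19)/(21).

    u_k: callable, kinematically admissible displacement (u_k(0) = Delta)
    N_s: callable, statically admissible axial force (N_s(L) = P)
    """
    x = np.linspace(0.0, L, n)
    uk, Ns = u_k(x), N_s(x)
    eps_k = np.gradient(uk, x)         # strain of the compatible field, (1)
    F_s = np.gradient(Ns, x)           # support force of the dual field, (3)
    N_k, eps_s = EA * eps_k, Ns / EA   # constitutive relations, (2)
    F_k, u_s = k * uk, F_s / k         # support force F = k*u and (6)
    rho = (N_k - Ns) * (eps_k - eps_s) + (F_k - F_s) * (uk - u_s)  # (20)
    return 0.5 * np.sum((rho[1:] + rho[:-1]) * np.diff(x))
```

Since both terms of the density (20) are perfect squares for this problem, the result is non-negative, and it vanishes (up to discretization error) when both fields coincide with the exact solution.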

## Compatible and equilibrated finite element approximations

For our problem, any general function $$f(x)\in {\mathcal {H}}^1$$ (continuous and differentiable) that satisfies $$f(0)=\Delta$$ is a compatible displacement field, with a normal force distribution $$N=u'EA$$. Also, any general function $$g(x)\in {\mathcal {H}}^1$$ is an equilibrated normal force distribution, provided $$g(L)=P$$, with a displacement field $$u=g'/k$$.

When considering the displacement field, we can write the bilinear form of the strain energy product as:

\begin{aligned} a_k(u_{\alpha },u_{\beta }) = \int _0^{L} \left( k u_{\alpha }u_{\beta } + EA u'_{\alpha }u'_{\beta } \right) \mathrm {d}x, \end{aligned}
(23)

and the work of the concentrated force is

\begin{aligned} b_k(u_{\beta }) = P \ u_{\beta }(L), \end{aligned}
(24)

where $$a_k(u_{\alpha },u_{\beta })$$ and $$b_k(u_{\beta })$$ are the bilinear and linear form products, corresponding to compatible (kinematically admissible) solutions.

For the equilibrated solution (statically admissible), we will have:

\begin{aligned} a_s(N_{\alpha },N_{\beta }) = \int _0^{L} \left( \frac{1}{k} N'_{\alpha }N'_{\beta } + \frac{1}{EA} N_{\alpha }N_{\beta } \right) \mathrm {d}x, \end{aligned}
(25)

and the work of the imposed displacement is

\begin{aligned} b_s(N_{\beta }) = \Delta \ N_{\beta }(0). \end{aligned}
(26)

It is also possible to consider a mixed form

\begin{aligned} a_m(u_{\alpha }, N_{\beta }) = \int _0^{L} \left( u_{\alpha }N'_{\beta } + u'_{\alpha }N_{\beta } \right) \mathrm {d}x. \end{aligned}
(27)

We use a Galerkin approach to determine the approximate solutions. Assuming a polynomial degree of approximation p, we have:

\begin{aligned} u(x) = \varvec{\phi }(x) \, \hat{\mathbf {u}}; \qquad N(x) = \varvec{\phi }(x) \, \hat{\mathbf {N}}, \end{aligned}
(28)

where the vectors $$\hat{\mathbf {u}}$$ and $$\hat{\mathbf {N}}$$ represent the coefficients of the approximations of the displacements and axial forces, respectively, and $$\varvec{\phi }(x)$$ is the vector of polynomial basis functions.

Substituting (28) into (23) and (25) leads to the definition of the stiffness and flexibility matrices, $$\varvec{{\mathcal {K}}}$$ and $$\varvec{{\mathcal {F}}}$$, respectively,

\begin{aligned} \varvec{{\mathcal {K}}}= & {} \int _0^L \left( k \, \varvec{\phi }(x) \varvec{\phi }^T\!(x)+EA\;\varvec{\phi '}(x)\varvec{\phi '}^{T}\!(x) \right) \mathrm {d}x; \end{aligned}
(29)
\begin{aligned} \varvec{{\mathcal {F}}}= & {} \int _0^L \left( \frac{1}{k} \varvec{\phi '}(x)\varvec{\phi '}^T\!(x)+\frac{1}{EA}\varvec{\phi }(x) \, \varvec{\phi }^T\!(x) \right) \mathrm {d}x. \end{aligned}
(30)

We can define a matrix that projects the equilibrated solutions onto the compatible ones, as in (27), such that:

\begin{aligned} \varvec{{\mathcal {S}}} = \int _0^L (\varvec{\phi }(x) \varvec{\phi '}^T\!(x) + \varvec{\phi '}(x) \varvec{\phi }^T\!(x) ) \, \mathrm {d}x . \end{aligned}
(31)

The vectors of applied forces and imposed displacements are obtained by substituting (28) into (24) and (26), so that:

\begin{aligned} \mathbf {Q} = P \, \varvec{\phi }(L); \qquad \mathbf {e} = \Delta \, \varvec{\phi }(0). \end{aligned}
(32)

The resulting finite element compatible and equilibrated systems are, respectively:

\begin{aligned} \varvec{{\mathcal {K}}} \, \hat{\mathbf {u}} = \mathbf {Q}; \qquad \varvec{{\mathcal {F}}} \, \hat{\mathbf {N}} = \mathbf {e}. \end{aligned}
(33)

We can obtain the finite element approximations of the displacements u and axial forces N by solving the systems in (33) and substituting the results into (28).
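To make the discrete systems concrete, the following sketch assembles the stiffness matrix (29) for linear elements on a uniform mesh and solves the compatible system in (33), imposing $$u(0)=\Delta$$ by elimination. The helper names are ours, and only linear basis functions are used, whereas the paper works with hierarchical bases of arbitrary degree:

```python
import numpy as np

def assemble_K(nel, L, EA, k):
    """Stiffness matrix (29) for nel linear elements on a uniform mesh."""
    h = L / nel
    Me = h / 6.0 * np.array([[2.0, 1.0], [1.0, 2.0]])    # int phi_i phi_j dx
    Ke = 1.0 / h * np.array([[1.0, -1.0], [-1.0, 1.0]])  # int phi_i' phi_j' dx
    K = np.zeros((nel + 1, nel + 1))
    for e in range(nel):
        K[e:e + 2, e:e + 2] += k * Me + EA * Ke
    return K

def solve_compatible(nel, L, EA, k, Delta, P):
    """Solve the compatible system in (33): K u = Q, with u(0) = Delta."""
    K = assemble_K(nel, L, EA, k)
    Q = np.zeros(nel + 1)
    Q[-1] = P                      # work of the end force, (24)
    Q -= K[:, 0] * Delta           # move the prescribed dof to the rhs
    u = np.zeros(nel + 1)
    u[0] = Delta
    u[1:] = np.linalg.solve(K[1:, 1:], Q[1:])
    return u
```

For $$EA = k = L = 1$$, $$\Delta = 0$$ and $$P = 1$$, the exact tip displacement is $$\tanh (1) \approx 0.7616$$, which the linear elements approach as the mesh is refined.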

## Compatible and equilibrated PGD approximations

A PGD formulation is used to obtain the approximations of the solutions of each complementary problem, as in . We define $$\varvec{\mu }$$, a vector of $${\mathcal {D}}$$ parameters $$\mu _1,\mu _2,\dots ,\mu _{{\mathcal {D}}}$$, each defined in its own domain $$\Omega _{\mu _i} \subset {\mathbb {R}}$$, with $$i=1,2,\dots ,{\mathcal {D}}$$. The vector of parameters $$\varvec{\mu }$$ then belongs to $$\Omega _{\mu } = \Omega _{\mu _1} \otimes \Omega _{\mu _2} \otimes \dots \otimes \Omega _{\mu _{{\mathcal {D}}}} \subset {\mathbb {R}}^{{\mathcal {D}}}$$. Each solution will depend on the $${\mathcal {D}}$$ parameters and will be approximated as a sum of $${\mathcal {N}}$$ modes of $${\mathcal {D}}$$ independent functions of each variable. We can write a general PGD approximate solution as:

\begin{aligned} f_{\textit{pgd}}(x,\varvec{\mu }) = \sum _{d=0}^p \phi _d(x){\hat{f}}_{d_{\textit{pgd}}}(\varvec{\mu }) = \varvec{\phi }(x) \hat{\mathbf {f}}_{\textit{pgd}}(\varvec{\mu }), \end{aligned}
(34)

where,

\begin{aligned} \hat{\mathbf {f}}_{\textit{pgd}}(\varvec{\mu }) = \sum _{m=1}^{{\mathcal {N}}} \bar{\mathbf {f}}^m \prod _{j=1}^{{{\mathcal {D}}}} F_{j}^m(\mu _j), \end{aligned}
(35)

with $$\bar{\mathbf {f}}$$ representing the coefficients of the approximation in the physical space. A linear combination of polynomials of degree $$p_j$$ is used to represent the function:

\begin{aligned} F_{j}^m(\mu _j) = \sum _{d_j=0}^{p_j}{\Phi }_{d_j}(\mu _j) {\hat{F}}_{j,d_j}^m, \end{aligned}
(36)

which is a generalization of (28) where $${\hat{F}}$$ represents the coefficients of the approximation in the parameters.
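Evaluating the separated representation (35) at a given parameter point is a sum of rank-one products over the modes. A minimal sketch (illustrative names, not from the paper):

```python
import numpy as np

def eval_modes(f_bar, F_funcs, mu):
    """Evaluate (35): sum over modes of f_bar^m * prod_j F_j^m(mu_j).

    f_bar:   array (n_modes, n_dof), spatial coefficients of each mode
    F_funcs: F_funcs[m][j] is a callable for the parametric function F_j^m
    mu:      sequence with the D parameter values
    """
    f_hat = np.zeros(f_bar.shape[1])
    for m, fm in enumerate(f_bar):
        scale = 1.0
        for j, mu_j in enumerate(mu):
            scale *= F_funcs[m][j](mu_j)  # product over the parameters
        f_hat += scale * fm               # sum over the modes
    return f_hat
```

The physical-space field then follows from (34) by multiplying with the basis $$\varvec{\phi }(x)$$.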

A fixed point iteration is used to determine the terms in each product or, more precisely, the coefficients $${\hat{F}}$$ and $$\bar{\mathbf {f}}$$. This iterative procedure tries to minimize the extended Galerkin residual of (23) or (25). For a generic term $$n>1$$, the approximation $$\tilde{\mathbf {f}}$$ is written as

\begin{aligned} \tilde{\mathbf {f}} = \sum _{m=1}^{n-1} \bar{\mathbf {f}}^m \prod _{j=1}^{{{\mathcal {D}}}} F_{j}^m(\mu _j) + \bar{\mathbf {f}}^n\prod _{j=1}^{{{\mathcal {D}}}} F_{j}^n(\mu _j). \end{aligned}
(37)

Defining

\begin{aligned} \tilde{\mathbf {f}}^{n-1} = \sum _{m=1}^{n-1} \bar{\mathbf {f}}^m \prod _{j=1}^{{{\mathcal {D}}}} F_{j}^m(\mu _j) \quad \text {and} \quad \Delta \tilde{\mathbf {f}}^{n} = \bar{\mathbf {f}}^n\prod _{j=1}^{{{\mathcal {D}}}} F_{j}^n(\mu _j), \end{aligned}
(38)

we can further simplify $$\tilde{\mathbf {f}}$$ as:

\begin{aligned} \tilde{\mathbf {f}} = \tilde{\mathbf {f}}^{n-1} + \Delta \tilde{\mathbf {f}}^{n}. \end{aligned}
(39)

The extended residual is obtained by considering the integration domain of (23) or (25) including all elements and all parameters. We want to impose that

\begin{aligned} a(\tilde{\mathbf {f}},\mathbf {g}) - b(\mathbf {g}) = 0 \quad \forall \mathbf {g}, \end{aligned}
(40)

or

\begin{aligned} a(\Delta \tilde{\mathbf {f}}^n,\mathbf {g}) - (b(\mathbf {g})-a({\tilde{f}}^{n-1},\mathbf {g})) = 0 \quad \forall \mathbf {g}. \end{aligned}
(41)

In order to compute a specific function, a fixed point iteration scheme is used, with all other functions held constant.

In the fixed point scheme only the physical space or one variable is considered at a time, e.g. $$\mu _{\alpha }$$, so that the test space (the $$\mathbf {g}$$’s) is restricted to the basis used for that variable.

To determine the values of $$\bar{\mathbf {f}}^n$$ for all elements, which define the solution in the physical space, we solve:

\begin{aligned} \tilde{\mathbf {f}} = \tilde{\mathbf {f}}^{n-1} + \bar{\mathbf {f}}^n\prod _{j=1}^{{{\mathcal {D}}}} F_{j}^n(\mu _j). \end{aligned}
(42)

For a specific function $$F_{\alpha }^n(\mu _{\alpha })$$ the iteration procedure determines the coefficients $${{\hat{F}}}_{\alpha }^n$$ that satisfy:

\begin{aligned} \tilde{\mathbf {f}} = \tilde{\mathbf {f}}^{n-1} + \bar{\mathbf {f}}^n\; F_{\alpha }^n(\mu _{\alpha }) \prod _{j=1;j\ne \alpha }^{{{\mathcal {D}}}} F_{j}^n(\mu _j). \end{aligned}
(43)

Finally, the parametric PGD approximation in terms of the displacements is:

\begin{aligned} \hat{\mathbf {u}}_{\textit{pgd}}(\varvec{\mu }) = \sum _{m=1}^{{\mathcal {N}}} \bar{\mathbf {u}}^m \prod _{j=1}^{{{\mathcal {D}}}} U_{j}^m(\mu _j), \end{aligned}
(44)

and in terms of axial forces:

\begin{aligned} \hat{\mathbf {N}}_{\textit{pgd}}(\varvec{\mu }) = \sum _{m=1}^{{\mathcal {N}}} \bar{\mathbf {N}}^m \prod _{j=1}^{{{\mathcal {D}}}} {\tilde{N}}_{j}^m(\mu _j). \end{aligned}
(45)

As is typical for the PGD method, the greater the number of terms in the sum, the better the approximation should be. The convergence criterion adopted here is the difference between the strain energies of the current pair of solutions (compatible and equilibrated). The process is stopped either when the error is lower than a required tolerance, or when the process stagnates.

When non-homogeneous boundary conditions are considered ($$u(0)\ne 0$$ and $$N(L)\ne 0$$, respectively for the compatible and for the equilibrium models), their effect is taken into account in the first term of the sum in (36), so that for the other terms the boundary conditions are homogeneous.
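The enrichment loop described above can be sketched for a discrete, single-parameter model problem $$(\mathbf {K}_0 + \mu \, \mathbf {K}_1)\,\mathbf {u}(\mu ) = \mathbf {Q}$$, a deliberately simplified stand-in for the multi-parameter systems of this paper (all names are ours):

```python
import numpy as np

def pgd_enrich(K0, K1, Q, mus, w, n_modes=10, n_fix=3, tol=1e-8):
    """Greedy rank-one PGD for (K0 + mu*K1) u(mu) = Q.

    mus, w: parameter sample points and integration weights.
    Each mode u_bar * F(mu) is found by a fixed-point (alternating)
    iteration between the spatial and the parametric problems.
    """
    n, M = K0.shape[0], mus.size
    U = np.zeros((n, M))                 # accumulated solution at the samples
    for _ in range(n_modes):
        F = np.ones(M)                   # parametric function, start from 1
        for _ in range(n_fix):
            # residual of the previously accumulated solution, as in (41)
            R = Q[:, None] - (K0 @ U + (K1 @ U) * mus[None, :])
            # spatial problem: Galerkin projection weighted over the samples
            A = (w * F**2).sum() * K0 + (w * F**2 * mus).sum() * K1
            b = (R * (w * F)[None, :]).sum(axis=1)
            u_bar = np.linalg.solve(A, b)
            # parametric problem: one scalar equation per sample point
            num = u_bar @ R
            den = u_bar @ (K0 @ u_bar) + mus * (u_bar @ (K1 @ u_bar))
            F = num / den
        dU = np.outer(u_bar, F)
        U += dU
        if np.linalg.norm(dU) < tol * np.linalg.norm(U):
            break                        # enrichment has stagnated
    return U
```

Each added mode reduces the residual of the accumulated solution; for symmetric positive problems a few fixed-point sweeps per mode are typically enough.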

## Parametric problem and approximated error measurements

We defined $$\varvec{\mu }$$ as a vector of $${\mathcal {D}}$$ parameters $$\mu _1,\mu _2,\dots ,\mu _{{\mathcal {D}}}$$. Since for our specific example the integrals in the physical space are already considered in the system matrices presented in section 2, we can write the parametric finite element approximations of the potential energies directly in terms of the stiffness and flexibility matrices and of the vectors of the applied force and imposed displacement, such that:

\begin{aligned} \Pi _{\textit{fem}}(\hat{\mathbf{u }}(\varvec{\mu }))= \frac{1}{2} \hat{\mathbf{u }}^T\!(\varvec{\mu }) \, \mathcal {K}(\varvec{\mu }) \, \hat{\mathbf{u }}(\varvec{\mu }) - \hat{\mathbf{u }}^T\!(\varvec{\mu }) \mathbf{Q} , \end{aligned}
(46)

and

\begin{aligned} \Pi ^c_{\textit{fem}}(\hat{\mathbf{N }}(\varvec{\mu })) = \frac{1}{2} \hat{\mathbf{N }}^T\!(\varvec{\mu }) \mathcal {F}(\varvec{\mu }) \, \hat{\mathbf{N }}(\varvec{\mu }) - \hat{\mathbf{N }}^T\!(\varvec{\mu }) \mathbf{e} . \end{aligned}
(47)

We apply the same concept to obtain the bound of the energy of the error, so that:

\begin{aligned} \epsilon _{\textit{fem}}^2(\varvec{\mu }) = \hat{\mathbf{u }}^T\!(\varvec{\mu }) \, \mathcal {K}(\varvec{\mu }) \, \hat{\mathbf{u }}(\varvec{\mu }) + \hat{\mathbf{N }}^T\!(\varvec{\mu }) \mathcal {F}(\varvec{\mu }) \, \hat{\mathbf{N }}(\varvec{\mu }) - 2\, \hat{\mathbf{N }}^T\!\!(\varvec{\mu }) \mathcal {S} \, \hat{\mathbf{u }}(\varvec{\mu }), \end{aligned}
(48)

or we can simply define it in terms of the energy density as:

\begin{aligned} \epsilon _{\textit{fem}}^2(\varvec{\mu }) = \int \rho _{\textit{fem}}(\varvec{\mu }) \, \mathrm {d}x. \end{aligned}
(49)

When considering the error measurements in the parametric form, we will obviously have a different result for each combination of parameters we use. Therefore, it is necessary to have an additional way to determine the quality of the solutions obtained, that also considers the effects of the parametric domain. One simple solution is to integrate the error measurements obtained in the parametric domain, resulting in a single value that accounts for the quality of all possible solutions. For the energy of the error in particular, it is worth noting that if the integral in space is extended to the parameters, the bounding properties still hold, so that:

\begin{aligned} \begin{aligned}&\int _{\Omega _{\varvec{\mu }}} \epsilon _{\textit{fem}}^2(\varvec{\mu }) \, \mathrm {d} \Omega _{\varvec{\mu }} \ge \int _{\Omega _{\varvec{\mu }}} \epsilon _{k_{\textit{fem}}}^2(\varvec{\mu }) \, \mathrm {d} \Omega _{\varvec{\mu }}; \text { or} \\&\int _{\Omega _{\varvec{\mu }}} \epsilon _{\textit{fem}}^2(\varvec{\mu }) \, \mathrm {d} \Omega _{\varvec{\mu }} \ge \int _{\Omega _{\varvec{\mu }}} \epsilon _{s_{\textit{fem}}}^2(\varvec{\mu }) \, \mathrm {d} \Omega _{\varvec{\mu }}. \end{aligned} \end{aligned}
(50)

We can obtain similar approximated error measures using the PGD approximations. We need only to substitute the finite element approximated displacements $$\hat{\mathbf {u}}$$ and axial forces $$\hat{\mathbf {N}}$$ for their PGD counterparts $$\hat{\mathbf {u}}_{\textit{pgd}}$$ and $$\hat{\mathbf {N}}_{\textit{pgd}}$$.

## Practical aspects of the discretization of the physical domain

A global coordinate system ($$x \in [0,L]$$) is used, where the coordinates of the initial and final nodes of section b are $$\gamma _{b-1}$$ and $$\gamma _b$$, with $$\gamma _0=0$$ and $$\gamma _{n_b}=L$$. Therefore, the geometric parameters correspond to the $$\gamma _i$$, with $$i=1, \ldots , (n_b-1)$$.

Each section, which is divided into $$n_{e[b]}$$ elements, uses an intermediate coordinate system ($${{\bar{x}}}_{[b]} \in [0,1]$$). The coordinates of the initial and final nodes of element e are $${\bar{\gamma }}_{[b]e-1}$$ and $${\bar{\gamma }}_{[b]e}$$, with $${\bar{\gamma }}_{[b]0}=0$$ and $${\bar{\gamma }}_{[b]n_e}=1$$. In the following, we replace $${\bar{\gamma }}_{[b]e}$$ with $${\bar{\gamma }}_e$$, $${\bar{x}}_{[b]e}$$ with $${\bar{x}}_{e}$$ and $${\bar{x}}_{[b]}$$ with $${\bar{x}}$$, unless the reference to the section of the element is necessary.

In the domain of each element, a local coordinate system is used ($$\xi \in [-1,1]$$), which is linearly mapped to $${\bar{x}}$$, which is in turn linearly mapped to x.

Using $$\phi _0$$ and $$\phi _1$$ to represent the linear interpolation functions associated with the end nodes of an interval we have:

\begin{aligned} {\bar{x}}(\xi ) = \phi _0(\xi )\, {\bar{\gamma }}_{e-1} + \phi _1(\xi )\, {\bar{\gamma }}_{e}\quad \text {and}\quad x({\bar{x}}) = \phi _0({\bar{x}})\, \gamma _{b-1} + \phi _1({\bar{x}})\, \gamma _{b}. \end{aligned}
(51)

These transformations are illustrated in Fig. 2.

Each section may be divided into several finite elements. Normally, for problems with fixed geometry, a single mapping is used, from the element frame ($$\xi$$) to the global coordinates (x).

Since both mappings are linear, the Jacobian of each transformation is constant, and the Jacobian of the double mapping, J, is equal to their product. The derivative of an arbitrary function, f, is obtained as

\begin{aligned} \frac{\mathrm {d}f}{\mathrm {d} x} = \frac{\mathrm {d}f}{\mathrm {d} \xi } \frac{\mathrm {d}\xi }{\mathrm {d} {\bar{x}}} \frac{\mathrm {d}{\bar{x}}}{\mathrm {d} x} = \frac{2}{({\bar{\gamma }}_{e}-{\bar{\gamma }}_{e-1})} \frac{1}{(\gamma _{b}-\gamma _{b-1})} \frac{\mathrm {d}f}{\mathrm {d} \xi } = \frac{1}{J_{e}}\frac{\mathrm {d}f}{\mathrm {d} \xi }, \end{aligned}
(52)

and its integral is

\begin{aligned} \int f\; \mathrm {d}x = \int f J_{e} \; \mathrm {d}\xi . \end{aligned}
(53)
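The two linear maps in (51) and the composite Jacobian in (52) can be written directly (a small sketch with our own argument names: the section ends $$\gamma _{b-1},\gamma _b$$ and the element ends $${\bar{\gamma }}_{e-1},{\bar{\gamma }}_e$$):

```python
def map_xi_to_x(xi, gamma_b0, gamma_b1, gbar_e0, gbar_e1):
    """Compose the two linear maps of (51): xi in [-1,1] -> xbar -> x."""
    xbar = 0.5 * (1.0 - xi) * gbar_e0 + 0.5 * (1.0 + xi) * gbar_e1
    return (1.0 - xbar) * gamma_b0 + xbar * gamma_b1

def jacobian(gamma_b0, gamma_b1, gbar_e0, gbar_e1):
    """Constant Jacobian J_e of the double mapping, as in (52)."""
    return 0.5 * (gbar_e1 - gbar_e0) * (gamma_b1 - gamma_b0)
```

For instance, the first of four equal elements of a section with $$\gamma = 0.5$$ spans $$x \in [0, 0.125]$$ and has $$J_e = 0.0625$$, so $$\int \mathrm {d}x = J_e \int \mathrm {d}\xi = 0.125$$, as expected.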

Our basis, in local coordinates, is obtained by combining the linear interpolation functions with the Legendre polynomials $${{{\mathcal {L}}}}_i(\xi )$$:

\begin{aligned} {\left\{ \begin{array}{ll} \phi _0(\xi ) = \frac{1-\xi }{2}, &{} \\ \phi _1(\xi ) = \frac{1+\xi }{2}, &{} \\ \phi _i(\xi ) = \phi _0(\xi ) \;\phi _1(\xi ) \; {{{\mathcal {L}}}}_{i-2}(\xi ), &{} \text {for} \; i > 1 \end{array}\right. } \end{aligned}
(54)
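A sketch of this basis using NumPy's Legendre module (our own helper; `p` is the polynomial degree):

```python
import numpy as np
from numpy.polynomial import legendre

def basis(xi, p):
    """Hierarchical basis (54): two nodal functions plus internal modes
    built from the product of the nodal functions and Legendre polynomials."""
    xi = np.asarray(xi, dtype=float)
    phi = [(1.0 - xi) / 2.0, (1.0 + xi) / 2.0]
    for i in range(2, p + 1):
        Li = legendre.legval(xi, [0.0] * (i - 2) + [1.0])  # L_{i-2}(xi)
        phi.append(phi[0] * phi[1] * Li)
    return np.array(phi)
```

The internal modes vanish at $$\xi = \pm 1$$, so only the two nodal functions carry the end values, which keeps the inter-section continuity conditions simple.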

We can re-write, for example, the stiffness and flexibility matrices in the local frame of the elements, by applying (52) and (53):

\begin{aligned} {\mathcal {K}}_{e_{i,j}}= & {} \int _{-1}^1 \left( k\; \phi _i\phi _j J_{e} +EA\; \phi '_i\phi '_j \frac{1}{J_{e}} \right) \; \mathrm {d}\xi ; \end{aligned}
(55)
\begin{aligned} {\mathcal {F}}_{e_{i,j}}= & {} \int _{-1}^1 \left( \frac{1}{k} \phi '_i\phi '_j \frac{1}{J_{e}}+\frac{1}{EA}\phi _i\phi _j J_{e} \right) \; \mathrm {d}\xi . \end{aligned}
(56)

The complete system of equations for each problem combines the assembled stiffness (or flexibility) matrices of each element of each section, together with additional constraints that impose displacement continuity (5) or equilibrium of normal forces (9) between sections. This corresponds to imposing:

• Continuity of displacements and (weak) equilibrium of nodal forces, or

• Equilibrium of normal forces and (weak) continuity of the displacements.

For the compatible formulation, to have a continuous displacement at the node between sections b and $$b+1$$, we must impose that:

\begin{aligned} u_{[b]n_e}(1)= & {} u_{[b+1]1}(-1) \Longrightarrow \nonumber \\&\quad \sum _{i=0}^{p_e} \phi _i(1) {\hat{u}}_{[b]n_e,i} = \sum _{i=0}^{p_e} \phi _i(-1) {\hat{u}}_{[b+1]1,i} \Longrightarrow \nonumber \\&\quad {\hat{u}}_{[b]n_e,1} ={\hat{u}}_{[b+1]1,0}. \end{aligned}
(57)

This condition can be expressed in matrix form as

\begin{aligned} \mathbf {L}_{end} \mathbf {{\hat{u}}}_b - \mathbf {L}_{start} \mathbf {{\hat{u}}}_{b+1} = 0. \end{aligned}
(58)

The transpose of these matrices reflects the effect of the force at the node (transmitted between sections b and $$b+1$$) in the equations of weak equilibrium of the adjacent sections.

For the equilibrated formulation, the process is complementary: we impose continuity of the nodal forces and the displacement of the interface is accounted for in the weak compatibility conditions of each adjacent bar. The matrices involved are the same.
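With the hierarchical basis (54), only the nodal functions are nonzero at $$\xi = \pm 1$$, so the matrices in (58) reduce to simple selection rows (a sketch with our own names):

```python
import numpy as np

def boundary_rows(p):
    """Selection rows for (58): basis values at the element ends.

    With the basis (54), phi_0(-1) = 1 and phi_1(+1) = 1, while all
    internal modes vanish at both ends.
    """
    L_start = np.zeros(p + 1)
    L_start[0] = 1.0
    L_end = np.zeros(p + 1)
    L_end[1] = 1.0
    return L_start, L_end
```

Applied to the coefficient vectors of two adjacent sections, (58) then reduces to the scalar condition derived in (57).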

Note that, if we wanted to consider a concentrated force applied at a node, or a relative imposed displacement, these equations would need to be modified accordingly.

In the practical implementation of the problem, we apply these mapping considerations for all the equations described so far, but we will omit them from now on to reduce repetition of the text.

## Characterization of the test case

To apply the concepts presented here, we consider the problem shown in Fig. 4, with the following characteristics:

• The bar is composed of two sections, with a total length L equal to one;

• Each section has its own support stiffness $$k_b$$;

• The axial stiffness EA for the first section has a unit value;

• The axial stiffness of the second section is equal to $$\beta$$;

• The length of the first section is equal to $$\gamma$$.

The parameters for the PGD with their respective limits are:

\begin{aligned} {\left\{ \begin{array}{ll} \mu _1 \equiv k_1, &{} \text { with } k_1\in [0.1,10]; \\ \mu _2 \equiv k_2, &{} \text { with } k_2\in [0.1,10]; \\ \mu _3 \equiv \beta , &{} \text { with } \beta \in [0.1,10]; \\ \mu _4 \equiv \gamma , &{} \text { with } \gamma \in [0.4,0.6]. \end{array}\right. } \end{aligned}
(59)

The calculations are performed considering an applied force at the free end ($$P_{x=L}=1$$, $$\Delta _{x=0}=0$$).

Unless otherwise stated, the polynomial approximations are linear for all parameters. Notice that these degrees can lead to solutions which are not accurate, but they serve the purpose of this paper, which is to identify regions where the simulations can be improved.

The tolerances $$\tau$$ adopted for the fixed point iteration scheme and the PGD enrichment process are $$\tau _{fix} = 1 \times 10^{-3}$$ and $$\tau _{\textit{pgd}} = 1 \times 10^{-6}$$, respectively, with the maximum number of iterations being 3 for the fixed point scheme and 3001 for the PGD. Setting such a low number of iterations for the fixed point scheme may lead to more terms being needed for the convergence of the PGD, but it drastically decreases the computational time, a phenomenon also observed in [11, 12]. This should not be generalized to all problems, as the behavior is case dependent.

To further improve the convergence of the solutions, the limits for the parameters $$k_1$$, $$k_2$$ and $$\beta$$ were mapped to a logarithmic scale $$k_1=10^{{\hat{k}}_1}$$, $$k_2=10^{{\hat{k}}_2}$$ and $$\beta =10^{\hat{\beta }}$$, with ranges $${\hat{k}}_1\in [-1,1]$$, $${\hat{k}}_2\in [-1,1]$$ and $$\hat{\beta }\in [-1,1]$$. This mapping does not affect the PGD approximations, serving only to diminish the influence of the edges in the solutions. Additional simulations must be performed with different limits to assess the effect of this modification.
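The mapping is a simple change of coordinate for the parametric functions, e.g.:

```python
import numpy as np

# PGD works on the auxiliary coordinate hat{k} in [-1, 1];
# the physical stiffness is recovered as k = 10**hat{k} in [0.1, 10]
k_hat = np.linspace(-1.0, 1.0, 21)
k = 10.0 ** k_hat
```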

We now present solutions in terms of the error in energy and the energy of the error. The simulations were performed using quadratic approximations in space, in order to better visualize the bounds of the solutions. The bounding characteristics discussed in the previous section are observed in Fig. 5, which presents selected results for our problem. The left part of the figure shows the integral of the energies of the PGD and FEM approximations, while the right part shows the integral of the energy of the error. The energies of the FEM solutions are very close to the exact ones and are represented as details in the figure. Notice that the exact energy of the error is zero and is therefore not shown, while the exact integral of the error in energy is greater than zero and is presented in the left part with a black line. The energies of the compatible and equilibrated solutions converge from above and below, respectively, when the problem is force-driven, which is the behavior observed. For the energy of the error, on the other hand, we have an upper bound, meaning that it should always be greater than the values found for the exact solutions. The figure also shows that, as the number of PGD modes increases, the two complementary PGD solutions converge to the FEM approximation, always preserving their bounding characteristics.

Figure 6 shows the relative difference between the approximations obtained from the PGD and FEM models. The integral of the energies of the PGD model converges to the FEM energy as the number of modes in the solution increases. The results obtained for the integral of the energy of the solutions and the energy of the error are virtually the same. This behavior is expected for force- or displacement-driven problems, due to the orthogonality of the FEM solutions, but is not the case for mixed problems.

Figure 7 shows the behavior of the relative difference between the integral of the energies of the PGD and FEM approximations and the average potential or strain energies as the degree or the number of elements per section increases. The solutions behave as expected, with better results being achieved as the degree or the number of elements grows.

Notice that for higher-degree approximations the solution starts to lose its convergence rate, and the initial solutions are worse than those obtained with smaller degrees. The decrease of the convergence rate is caused by a limit in the number of PGD modes due to the tolerance of the simulations. An indication that the solution is being constrained by the tolerance is that the number of PGD modes for a higher degree or number of elements may decrease, as can be seen for the solution with $$p = 5$$ at $$h = 4$$ and $$h = 5$$. Decreasing the tolerance can recover the rate of convergence, although this will not fix the poor results for the initial modes. We believe these poor results are a consequence of the complexity of the higher-degree solutions, which is suggested by the fact that this behavior is not observed when increasing the number of elements. The results in terms of the error in energy and the energy of the error are again the same and were omitted from Fig. 7.

The error in energy is simpler to visualize, as it is based on the energy of the solutions, which has a physical meaning. Also, each compatible and equilibrated solution has its own energy, allowing for individual analyses of the error. On the other hand, the bounds seen for the error in energy are limited to force- or displacement-driven problems, which is not the case for the energy of the error. That being said, from this point on we base our solutions solely on the energy of the error, as it provides error bounds for general problems.

Mesh adaptivity techniques are designed to improve approximate solutions by modifying either the element mesh configuration or their degrees of freedom. One of the key points in the mesh adaptivity process is the proper definition of the relation between the solution convergence and the element size and/or the degree of the approximation functions. This convergence dependency is expected to be precise when a smooth solution is being studied and the mesh that represents the solution domain has a sufficiently large number of elements.

We can usually divide the adaptivity process into three categories: the modification of the size of the elements in a mesh (h-adaptivity); the modification of the degree of the approximation functions (p-adaptivity); and a combination of both processes (hp-adaptivity). This paper explores h- and p-adaptivity, but not their combination.

We want to obtain an adaptivity indicator that is capable of capturing the best regions to refine, without differentiating between the physical and the parametric space. To achieve this, we collect both the parameters and the physical space in the variable $$\chi$$. This means that, from this point on, a reference to a parameter $$\chi$$ of the problem may refer either to a material parameter $$\mu$$ or to the coordinate x of the physical space.

The solutions presented will be shown in terms of the number of elements $$n_h$$ or the degrees of the polynomial approximations p. Our specific problem has a total number of parameters $$n_{\chi }$$ equal to five: four $$\mu$$'s and the physical space, which we divide into the two sections, $$b_1$$ and $$b_2$$.

A question that arises is how to use the complementary solutions to decide where to refine the model. We know that the integral of the energy of the error covers the whole parametric and physical space domain and is therefore always the same, no matter in which order we perform the integrals. The idea is, then, to look at the effects of each parameter independently by performing the integral of the error density $$\rho$$ in all parameters but one. If this function is constant, the parameter is not affecting the error bound, indicating that we need to look at the derivatives of the functions. This led to the idea of working with the derivative of $$\rho$$ with respect to each parameter. After numerical tests to assess how to relate the derivative with respect to the augmented set of parameters $$\chi$$ with the effect of that parameter on the error, the following expression for an error indicator $$\iota$$ was selected:

\begin{aligned} \iota (\chi ) = \int \int (\chi -C(\chi )) \frac{\partial \rho (\xi ,\varvec{\mu })}{\partial \chi } \, \mathrm {d}\Omega \, \mathrm {d}\Omega _{\varvec{\mu }}, \end{aligned}
(60)

where:

\begin{aligned} C(\chi )=\frac{\int \int \chi \, \rho (\xi ,\varvec{\mu }) \,\mathrm {d}\Omega \, \mathrm {d}\Omega _{\varvec{\mu }}}{\int \int \rho (\xi ,\varvec{\mu }) \, \mathrm {d}\Omega \,\mathrm {d}\Omega _{\varvec{\mu }}}. \end{aligned}
(61)

This expression was selected after testing several dimensionally consistent alternatives, including combinations of the derivatives of $$\rho$$ with respect to the parameters. From this study, we concluded that (60) is the alternative that best captures the regions where refinement is required. Note that this expression is not used to check whether the process has converged, only to decide where to refine next. We have not yet found a theoretical background that supports its application.
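A minimal one-dimensional sketch of (60) and (61), assuming $$\rho$$ has already been integrated over all parameters except $$\chi$$ and is sampled on a uniform grid (illustrative Python, not the authors' implementation):

```python
import numpy as np

def center_of_mass(chi, rho):
    """C(chi) of (61): centroid of the non-negative error density rho."""
    return np.sum(chi * rho) / np.sum(rho)   # uniform grid: dx cancels

def indicator(chi, rho):
    """iota(chi) of (60): moment of d(rho)/d(chi) about the centroid."""
    drho = np.gradient(rho, chi)             # finite-difference derivative
    dx = chi[1] - chi[0]
    return np.sum((chi - center_of_mass(chi, rho)) * drho) * dx
```

A constant density gives a zero indicator, consistent with the observation above that a parameter which does not affect the error bound should not be selected for refinement.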

Considering $$\rho$$, which is non-negative, as a pseudo mass density, we can interpret $$\iota (\chi )$$ as the first-order moment of the derivative of the error with respect to the center of mass of the domain. Recalling that the subscript k indicates solutions that come from kinematically admissible displacements and s indicates solutions that come from statically admissible axial forces, we can write $$\rho$$ for an element, considering the domain decomposition particularities, as:

\begin{aligned} \begin{aligned} \rho (\xi ,\varvec{\mu }) =&J ((N_k(\xi ,\varvec{\mu })-N_s(\xi ,\varvec{\mu }))(\varepsilon _k(\xi ,\varvec{\mu })-\varepsilon _s(\xi ,\varvec{\mu })) \\&+(F_k(\xi ,\varvec{\mu })-F_s(\xi ,\varvec{\mu }))(u_k(\xi ,\varvec{\mu })-u_s(\xi ,\varvec{\mu }))). \end{aligned} \end{aligned}
(62)

where the components in (62) can be approximated in a similar manner as in (28), so that:

\begin{aligned} \begin{aligned} u_{k}(\xi ,\varvec{\mu })&= \varvec{\phi }(\xi ) \hat{\mathbf {u}}_k(\varvec{\mu });&N_{s}(\xi ,\varvec{\mu })&= \varvec{\phi }(\xi ) \hat{\mathbf {N}}_s(\varvec{\mu }); \\ F_k(\xi ,\varvec{\mu })&= k \, u_k(\xi ,\varvec{\mu });&\varepsilon _s(\xi ,\varvec{\mu })&= \frac{1}{\beta } N_s(\xi ,\varvec{\mu }); \\ \varepsilon _k(\xi ,\varvec{\mu })&=\frac{\mathrm {d} \varvec{\phi }(\xi )}{\mathrm {d} \xi }\frac{\mathrm {d} \xi }{\mathrm {d} x}\hat{\mathbf {u}}_k(\varvec{\mu });&F_s(\xi ,\varvec{\mu })&= \frac{\mathrm {d} \varvec{\phi }(\xi )}{\mathrm {d} \xi }\frac{\mathrm {d} \xi }{\mathrm {d} x} \hat{\mathbf {N}}_s(\varvec{\mu }); \\ N_k(\xi ,\varvec{\mu })&=\beta \,\varepsilon _k(\xi ,\varvec{\mu });&u_s(\xi ,\varvec{\mu })&= \frac{1}{k} \, F_s(\xi ,\varvec{\mu }). \end{aligned} \end{aligned}
(63)

These approximations can be expressed both in terms of FEM or PGD, only needing to set $$\hat{\mathbf {u}}_k$$ and $$\hat{\mathbf {N}}_s$$ accordingly. In order to compute (60) we also need to compute the derivative $$\frac{\partial \rho (\xi ,\varvec{\mu })}{\partial \chi }$$ and, therefore, we need the derivatives of (62). We will have different expressions for derivatives depending on what type of parameter $$\chi$$ is representing.
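As a pointwise transcription of (62), assuming the complementary fields of (63) have already been evaluated at common quadrature points (illustrative Python):

```python
import numpy as np

def error_density(J, N_k, N_s, eps_k, eps_s, F_k, F_s, u_k, u_s):
    """Pointwise error density rho of (62); J is the element Jacobian."""
    return J * ((N_k - N_s) * (eps_k - eps_s) + (F_k - F_s) * (u_k - u_s))
```

The density vanishes wherever the kinematically and statically admissible fields coincide, i.e. when the exact solution is recovered.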

When $$\chi$$ represents the physical domain parameter x, and assuming that the material parameters k and $$\beta$$ are defined such that $$k = \mu _1$$ and $$\beta = 1$$ if $$x \le \mu _4$$, and $$k = \mu _2$$ and $$\beta = \mu _3$$ if $$x > \mu _4$$ we have:

\begin{aligned} \begin{aligned} \frac{\partial u_k(\xi ,\varvec{\mu })}{\partial x}&= \frac{1}{J} \frac{\mathrm {d} \varvec{\phi }(\xi )}{\mathrm {d} \xi } \hat{\mathbf {u}}_k(\varvec{\mu });&\frac{\partial N_s(\xi ,\varvec{\mu })}{\partial x}&= \frac{1}{J} \frac{\mathrm {d} \varvec{\phi }(\xi )}{\mathrm {d} \xi }\hat{\mathbf {N}}_s(\varvec{\mu }); \\ \frac{\partial F_k(\xi ,\varvec{\mu }) }{\partial x}&= k \, \frac{1}{J} \frac{\partial u_k(\xi ,\varvec{\mu })}{\partial \xi };&\frac{\partial \varepsilon _s(\xi ,\varvec{\mu }) }{\partial x}&= \frac{1}{\beta } \, \frac{1}{J} \frac{\partial N_s(\xi ,\varvec{\mu })}{\partial \xi }; \\ \frac{\partial \varepsilon _k(\xi ,\varvec{\mu }) }{\partial x}&= \frac{1}{J} \left( \frac{\mathrm {d}^2 \varvec{\phi }(\xi )}{\mathrm {d} \xi ^2} \right) \frac{\partial \xi }{\partial x}\hat{\mathbf {u}}_k(\varvec{\mu });&\frac{\partial F_s(\xi ,\varvec{\mu })}{\partial x}&= \frac{1}{J} \left( \frac{\mathrm {d}^2 \varvec{\phi }(\xi )}{\mathrm {d} \xi ^2} \right) \frac{\partial \xi }{\partial x}\hat{\mathbf {N}}_s(\varvec{\mu }); \\ \frac{\partial N_k(\xi ,\varvec{\mu })}{\partial x}&= \beta \, \frac{1}{J} \frac{\partial \varepsilon _k(\xi ,\varvec{\mu })}{\partial \xi };&\frac{\partial u_s(\xi ,\varvec{\mu })}{\partial x}&= \frac{1}{k} \, \frac{1}{J} \frac{\partial F_s(\xi ,\varvec{\mu })}{\partial \xi }. \end{aligned} \end{aligned}
(64)

And when $$\chi$$ represents one of the parameters $$\mu _i$$, with $$i=1,2,\ldots ,{\mathcal {D}}$$, we have:

\begin{aligned} \begin{aligned} \frac{\partial u_k(\xi ,\varvec{\mu })}{\partial \mu _i}&= \varvec{\phi }(\xi ) \frac{\partial \hat{\mathbf {u}}_k(\varvec{\mu })}{\partial \mu _i}; \\ \frac{\partial N_s(\xi ,\varvec{\mu })}{\partial \mu _i}&= \varvec{\phi }(\xi ) \frac{\partial \hat{\mathbf {N}}_s(\varvec{\mu })}{\partial \mu _i}; \\ \frac{\partial F_k(\xi ,\varvec{\mu }) }{\partial \mu _i}&= \frac{\mathrm {d} k}{\mathrm {d}\mu _i}u_k(\xi ,\varvec{\mu }) + k\frac{\partial u_k(\xi ,\varvec{\mu })}{\partial \mu _i}; \\ \frac{\partial \varepsilon _s(\xi ,\varvec{\mu }) }{\partial \mu _i}&= \frac{\mathrm {d} \beta ^{-1}}{\mathrm {d}\mu _i}N_s(\xi ,\varvec{\mu }) + \frac{1}{\beta } \frac{\partial N_s(\xi ,\varvec{\mu })}{\partial \mu _i}; \\ \frac{\partial \varepsilon _k(\xi ,\varvec{\mu }) }{\partial \mu _i}&= \left( \frac{\mathrm {d} }{\mathrm {d}\mu _i}\frac{\mathrm {d} \xi }{\mathrm {d} x} \right) \frac{\mathrm {d} \varvec{\phi }(\xi )}{\mathrm {d} \xi }\hat{\mathbf {u}}_k(\varvec{\mu }) + \frac{\mathrm {d} \varvec{\phi }(\xi )}{\mathrm {d} \xi }\frac{\mathrm {d} \xi }{\mathrm {d} x} \frac{\partial \hat{\mathbf {u}}_k(\varvec{\mu })}{\partial \mu _i}; \\ \frac{\partial F_s(\xi ,\varvec{\mu })}{\partial \mu _i}&= \left( \frac{\mathrm {d} }{\mathrm {d}\mu _i}\frac{\mathrm {d} \xi }{\mathrm {d} x} \right) \frac{\mathrm {d} \varvec{\phi }(\xi )}{\mathrm {d} \xi }\hat{\mathbf {N}}_s(\varvec{\mu }) + \frac{\mathrm {d} \varvec{\phi }(\xi )}{\mathrm {d} \xi }\frac{\mathrm {d} \xi }{\mathrm {d} x}\frac{\partial \hat{\mathbf {N}}_{s}(\varvec{\mu })}{\partial \mu _i}; \\ \frac{\partial N_k(\xi ,\varvec{\mu })}{\partial \mu _i}&= \frac{\mathrm {d} \beta }{\mathrm {d}\mu _i}\varepsilon _k(\xi ,\varvec{\mu }) + \beta \frac{\partial \varepsilon _k(\xi ,\varvec{\mu })}{\partial \mu _i}; \\ \frac{\partial u_s(\xi ,\varvec{\mu })}{\partial \mu _i}&= \frac{\mathrm {d} k^{-1}}{\mathrm {d}\mu _i}F_s(\xi ,\varvec{\mu }) + \frac{1}{k} \frac{\partial F_s(\xi ,\varvec{\mu })}{\partial \mu _i}. \end{aligned} \end{aligned}
(65)

Notice that $$\left( \frac{\mathrm {d}}{\mathrm {d}\mu _i}\frac{\mathrm {d} \xi }{\mathrm {d} x} \right) = 0$$ unless $$\mu _i$$ is the geometric parameter $$\mu _4 = \gamma$$. For the derivatives of the PGD approximations, we have:

\begin{aligned} \begin{aligned} \frac{\partial \hat{\mathbf {u}}_{k}(\varvec{\mu })}{\partial \mu _i}&= \sum _{m=1}^{{\mathcal {N}}} \bar{\mathbf {u}}^m\, \frac{\mathrm {d}\, U^m_{i}(\mu _i)}{\mathrm {d}\mu _i} \prod _{j=1; j \ne i}^{{{\mathcal {D}}}} U^m_j(\mu _j) \\ \frac{\partial \hat{\mathbf {N}}_{s}(\varvec{\mu })}{\partial \mu _i}&= \sum _{m=1}^{{\mathcal {N}}} \bar{\mathbf {N}}^m \, \frac{\mathrm {d}\, {\tilde{N}}^m_i(\mu _i)}{\mathrm {d}\mu _i} \prod _{j=1; j \ne i}^{{{\mathcal {D}}}} {\tilde{N}}^m_j(\mu _j) \end{aligned} \end{aligned}
(66)
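A sketch of the separated derivative (66), storing each mode as a nodal vector plus one factor function per parameter; all names are illustrative and the factor functions and their derivatives are assumed to be available:

```python
import numpy as np

def pgd_derivative(u_bars, factors, dfactors, i, mu):
    """d/d(mu_i) of sum_m u_bar^m * prod_j U_j^m(mu_j), as in (66).

    u_bars:   one nodal vector per mode
    factors:  per mode, a list of callables U_j^m(mu_j)
    dfactors: per mode, a list of callables dU_j^m/dmu_j
    """
    total = np.zeros_like(np.asarray(u_bars[0], dtype=float))
    for u_bar, U, dU in zip(u_bars, factors, dfactors):
        w = dU[i](mu[i])                  # differentiated factor for mu_i
        for j, Uj in enumerate(U):
            if j != i:                    # remaining factors are unchanged
                w *= Uj(mu[j])
        total += w * np.asarray(u_bar, dtype=float)
    return total
```

Only the factor of the differentiated parameter changes in each mode, which is what makes derivatives of a separated representation cheap to evaluate.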

We can now use Eqs. (63)–(66) in Eq. (60) to obtain the error indicator $$\iota$$. Notice that by multiplying the derivative by the variable, we keep $$\iota$$ with units that are consistent with the energy of the error $$\epsilon ^2$$.

This section studies the results for p-refinement when using the proposed adaptivity indicator. The procedure is simple: identify the parameter with the highest value of $$\iota$$ among all the parameters studied and increase its polynomial degree by one, as this parameter is expected to influence the solution error the most.

The adaptivity process using the proposed indicator is compared with uniform mesh adaptivity in Fig. 8. Using the adaptivity indicator leads to better results than simply increasing the degrees of all approximation functions uniformly, as it raises the degree of the parameter that has the greatest influence on the solution. The drawback of this process is the need to repeat the simulation for every degree that is improved.

One way to determine whether the method driving the adaptivity process is reliable is to compute all possible solutions for a given set of degrees of the polynomial approximation and verify whether the method accurately identifies which parameter should have its degree increased.

The following computation process was done:

1. Select the first parameter to be studied;

2. Increase the polynomial degree of the parameter by one and compute the error;

3. Return the parameter to the original degree and repeat the previous step for the next parameter, until all the variables are studied;

4. Pick the parameter that leads to the smallest error and permanently increase the polynomial degree of that parameter by one;

5. Go back to step two until the sum of the degrees of the parameters reaches a predetermined value.
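The loop above can be sketched as a single greedy step; `solve_and_error` is a hypothetical black box returning the error obtained with a given tuple of polynomial degrees:

```python
def greedy_p_step(degrees, solve_and_error):
    """One pass of steps 2-4: try raising each degree, keep the best."""
    best_err, best_i = float("inf"), None
    for i in range(len(degrees)):
        trial = list(degrees)
        trial[i] += 1                      # step 2: raise one degree
        err = solve_and_error(tuple(trial))
        if err < best_err:                 # step 3: scan all parameters
            best_err, best_i = err, i
    refined = list(degrees)
    refined[best_i] += 1                   # step 4: permanent increase
    return refined, best_err
```

Step five then repeats this function until the sum of the degrees reaches the prescribed budget.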

The results for different tolerances ($$\tau _{\textit{pgd}}$$) were compared with a reference solution ($$\tau _{ref}$$) which was obtained by testing all possible solutions and choosing the optimal result, using a tolerance of $$1\times 10^{-12}$$. The results can be seen in Fig. 9, where the error tends to stagnate after a certain tolerance is achieved.

The adaptivity process is capable of precisely identifying the parameter that has the greatest impact on the solution, achieving lower errors sooner than increasing all the degrees at once. We consider that the adaptivity method works if it selects the parameter whose polynomial degree must be increased to achieve the smallest error. Therefore, as long as the tolerance is small enough, the proposed adaptivity method chooses the optimal parameter to be refined.

We now study the simulation results for h-refinement when using the proposed adaptivity indicator. Again, just like for the p-refinement, we use the parameter that gives the highest value of $$\iota$$ as the indicator for the adaptivity process.

One key difference for h-refinement is that it is also necessary to know where the element should be divided. The most natural choice is the middle of the element, but we can instead use the center of gravity of the energy of the error for that parameter, which is already being computed for the adaptivity indicator. To confirm that this center of gravity is the best place to break the element, a simulation with 50 different break points in the interval $$]-1,1[$$ of the variable $$\beta$$ was performed and is presented in Fig. 10. It is possible to observe that the point that leads to the actual minimum is very close to the center of gravity of the parameter. This behavior is seen for all other parameters, for different numbers of elements or degrees, leading us to believe that the center of gravity provides a good indication of where to divide the element.
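The break-point experiment can be sketched as a scan over interior candidates; `error_after_split` is a hypothetical black box returning the error obtained when the element is split at a given point:

```python
import numpy as np

def scan_break_points(error_after_split, n=50):
    """Try n interior break points in ]-1, 1[ and return the best one."""
    points = np.linspace(-1.0, 1.0, n + 2)[1:-1]   # drop the end points
    errors = np.array([error_after_split(p) for p in points])
    return points[int(np.argmin(errors))], errors
```

In Fig. 10 the minimizing point lies close to the center of gravity of the error density, which supports using that centroid, already available from the indicator, as the split location.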

The adaptivity process using the proposed indicator is compared with uniform mesh adaptivity in Fig. 11. Using the adaptivity indicator leads to better results than uniformly increasing the number of elements, as it divides the elements of the variable that has the greatest influence on the solution. Just like for the p-refinement, the drawback of this process is the need to repeat the simulation for every new element division.

We repeat the verification process performed for the p-refinement, determining all possible solutions to the problem for a given set of elements and verifying whether the method accurately identifies which parameter should have its element divided.

The following computation process was performed:

1. Select the first parameter to be studied;

2. Divide the element of the parameter and compute the total error;

3. Return the parameter to the original number of elements and repeat the previous step for the next parameter, until all the variables are studied;

4. Pick the parameter that results in the smallest error and permanently split the element of that parameter;

5. Go back to step two until the total number of elements reaches a predetermined value.

The results for different tolerances ($$\tau _{\textit{pgd}}$$) were compared with a reference solution ($$\tau _{ref}$$) which was obtained by testing all possible solutions and choosing the optimal result, using a tolerance of $$1 \times 10^{-9}$$. These results are presented in Fig. 12, with the total error stagnating after a certain tolerance is achieved.

The adaptivity process is capable of precisely identifying the parameter that has the greatest impact on the solution, achieving lower total errors sooner than increasing all the elements at once. We consider the adaptivity method good if it selects the parameter whose element must be split to achieve the smallest error. Therefore, just like for the p-refinement, as long as the tolerance is small enough, the proposed adaptivity method chooses the optimal parameter to be refined.

## Conclusion

This paper defines new strategies to assess the error of a PGD parametric problem. We give a brief explanation of how the dual analysis of error works, defining error measures in terms of the energy of the error and the error in energy. We proceed by introducing finite element and PGD approximations, focused on the solution of a 1D linear elastic problem, considering the details of the implementation of a geometric parameter, specifically the length of the sections of the bar we study.

We present several examples to evaluate the behavior of the PGD approximations. We compute the integrals of the energy of the error and the potential energies in the parametric domain for both PGD and FEM and compare the results with the exact solutions. The integral of the energy of the error proved to be a good error measure, as it bounds the solutions without any additional conditions, while the error in energy requires force- or displacement-driven problems to obtain bounds of the solutions.

We developed an adaptivity indicator based on empirical data obtained from several simulations. This indicator is capable of capturing the optimal parameter (or space region) to be refined, both in terms of p- and h-refinement, leading to lower error values than those obtained with simple uniform refinement.

Additional work is being done to extend the results obtained to 2D and 3D frameworks. We can also extend the results to obtain quantities of interest and bounds of their errors from the PGD approximations, giving a direct physical meaning to the solutions obtained.

## References

1. Ammar A. The proper generalized decomposition: a powerful tool for model reduction. Int J Mater Form. 2010;3:89–102.

2. Ladevèze P, Chamoin L. On the verification of model reduction methods based on the proper generalized decomposition. Comput Methods Appl Mech Eng. 2011;200:2032–47.

3. Ammar A, Chinesta F, Díez P, Huerta A. An error estimator for separated representations of highly multidimensional models. Comput Methods Appl Mech Eng. 2010;199:1872–80.

4. Chinesta F, Keunings R, Leygue A. The proper generalized decomposition for advanced numerical simulations: a primer. Cham: Springer; 2014.

5. Zlotnik S, Díez P, Gonzalez D, Cueto E, Huerta A. Effect of the separated approximation of input data in the accuracy of the resulting PGD solution. Adv Model Simul Eng Sci. 2015;2:28.

6. Ammar A, Huerta A, Chinesta F, Cueto E, Leygue A. Parametric solutions involving geometry: a step towards efficient shape optimization. Comput Methods Appl Mech Eng. 2014;268:178–93.

7. Courard A, Néron D, Ladevèze P, Ballere L. Integration of PGD-virtual charts into an engineering design process. Comput Mech. 2016;57:637–51.

8. de Almeida JPM. A basis for bounding the errors of proper generalised decomposition solutions in solid mechanics. Int J Numer Methods Eng. 2013;94:961–84.

9. Almeida JPM, Maunder EAW. Equilibrium finite element formulations. Chichester: Wiley; 2017. p. 1–274.

10. Prager W, Synge JL. Approximations in elasticity based on the concept of function space. Q Appl Math. 1947;5:241–69.

11. Modesto D, Zlotnik S, Huerta A. Proper generalized decomposition for parameterized Helmholtz problems in heterogeneous and unbounded domains: application to harbor agitation. Comput Methods Appl Mech Eng. 2015;295:127–49.

12. Chamoin L, Pled F, Allier P-E, Ladevèze P. A posteriori error estimation and adaptive strategy for PGD model reduction applied to parametrized linear parabolic problems. Comput Methods Appl Mech Eng. 2017;327:118–46.

13. Reis J, Almeida JPM, Díez P, Zlotnik S. Error estimation for proper generalized decomposition solutions: a dual approach. Int J Numer Methods Eng. 2019. https://doi.org/10.1002/nme.6452.

14. Reis J, Almeida JPM, Díez P, Zlotnik S. Error estimation for PGD solutions: dual analysis and adaptivity for quantities of interest. 2020. Submitted for publication.

## Acknowledgements

Jonatha Reis was supported by the European Education, Audiovisual and Culture Executive Agency (EACEA) under the Erasmus Mundus Joint Doctorate “Simulation in Engineering and Entrepreneurship Development (SEED)”, FPA 2013-0043.

Pedro Díez and Sergio Zlotnik are grateful for the financial support provided by the Spanish Ministry of Economy and Competitiveness (Grant agreement No. DPI2017-85139-C2-2-R), the Generalitat de Catalunya (Grant agreement No. 2017-SGR-1278), and the project H2020-RISE MATH-ROCKS GA No. 777778.

## Author information


### Contributions

All authors discussed the content of the article, based on their expertise on the subjects presented. JR and JP prepared and ran the numerical examples. All authors read and approved the final manuscript.

### Corresponding author

Correspondence to Pedro Díez.

## Ethics declarations

### Competing interests

The authors declare that they have no competing interests. 