 Research Article
 Open Access
General treatment of essential boundary conditions in reduced order models for nonlinear problems
Advanced Modeling and Simulation in Engineering Sciences volume 3, Article number: 7 (2016)
Abstract
Inhomogeneous essential boundary conditions must be carefully treated in the formulation of Reduced Order Models (ROMs) for nonlinear problems. To investigate this issue, two methods are analysed: one in which the boundary conditions are imposed in a strong way, and a second in which a weak imposition of boundary conditions is made. The ideas presented in this work apply to the broad realm of a posteriori ROMs. Nevertheless, an a posteriori hyperreduction method is specifically considered in order to deal with the cost associated with the nonlinearity of the problems. Applications to nonlinear transient heat conduction problems with temperature-dependent thermophysical properties and time-dependent essential boundary conditions are studied; the strategies introduced in this work are, however, of general application.
Background
Currently, many engineering problems of practical importance suffer from the so-called “curse of dimensionality” [1]. In this context, the need to optimise nonlinear multiphysics problems makes it necessary to develop numerical techniques that can efficiently deal with the high computational cost characterising such applications. A widespread strategy is the formulation of Reduced Order Models, which can be implemented by adopting either the Proper Orthogonal Decomposition (POD) method [2, 3] or the Proper Generalised Decomposition (PGD) technique [4, 5]. The discussion in this paper only considers POD-based ROMs, from now on referred to as ROMs. The ideas presented here apply to the broad realm of a posteriori ROMs, although an a posteriori hyperreduction method, referred to as Hyper Reduced Order Model (HROM), is specifically considered in order to deal with the cost associated with the nonlinearity of the problems.
In what follows, let \(\mathcal {S}^h \subset \mathcal {S}\) and \(\mathcal {V}^h \subset \mathcal {V}\) be the finite-dimensional trial and test subspaces of the functional spaces \(\mathcal {S}\) and \(\mathcal {V}\) used in the definition of a variational problem. Generally, in the formulation of ROMs, an approximate solution \(\widehat{T}^h\) to \(T^h \in \mathcal {S}^h\) is sought in a subspace of \(\mathcal {S}^h\) by defining a new basis \({\varvec{X}} \in \mathbb {R}^{N \times k}\), where N is the number of degrees of freedom (DOFs) of the High Fidelity (HF) model and k is the dimension of the basis spanning the subspace of \(\mathcal {S}^h\). If a Bubnov–Galerkin projection is used, approximate versions \(\widehat{w}^h \in {span}\{ {\varvec{X}} \}\) of the test functions \(w^h\) are built, and functions \(T^h \in \mathcal {S}^h\) are approximated by affine translations of the test functions \(\widehat{w}^h\). In POD-based ROMs, the new basis \({\varvec{X}}\) is built by computing the singular value decomposition (SVD) [6] of a set of snapshots given by time instances of the spatial distribution of the solution of a training problem [7]. It is well known that the vectors comprising this basis inherit the behaviour of the snapshots [8], hindering the possibility of reproducing non-admissible test functions. That is why careful attention must be paid to how the snapshots for building \({\varvec{X}}\) are collected. This issue is studied in detail in this work.
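The POD basis construction just described can be sketched as follows; this is a minimal illustration in which a random matrix stands in for the real snapshot matrix, and the sizes and energy tolerance are assumptions made only for the example:

```python
import numpy as np

# Minimal sketch: build the POD basis X from solution snapshots.
# A random matrix stands in for real FEM training data.
rng = np.random.default_rng(0)
N, n_t = 200, 50                       # HF DOFs and number of time steps
S = rng.standard_normal((N, n_t))      # columns: T_n at each time step

# Thin SVD of the snapshot matrix; the left singular vectors are the
# POD modes, ordered by decreasing singular value.
U, sigma, _ = np.linalg.svd(S, full_matrices=False)

# Truncate to k modes by a relative energy criterion on the singular values.
energy = np.cumsum(sigma**2) / np.sum(sigma**2)
k = int(np.searchsorted(energy, 1.0 - 1e-10)) + 1
X = U[:, :k]                           # reduced basis, N x k

# The basis is orthonormal: X^T X = I_k.
assert np.allclose(X.T @ X, np.eye(k), atol=1e-10)
```

In practice the snapshot columns would be the converged nodal solutions of the training problem, and the truncation tolerance is a modelling choice.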
The concept of consistent snapshot collection procedures for nonlinear problems was first introduced by Carlberg et al. [9, 10]. As they pointed out in [10], “most nonlinear model reduction techniques reported in the literature employ a POD basis computed using as snapshots \(\{{\varvec{T}}_n \mid n=1, \ldots , n_t\}\) (Footnote 1), which do not lead to a consistent projection”. In the last expression, \(n_t\) is the number of time steps comprising the training problem and \({\varvec{T}}_n\) are the parameters such that \(T^h_n={\varvec{N}}^T{\varvec{T}}_n\), with \({\varvec{N}}\) given by the shape functions used for interpolation. The lack of consistency of these formulations stems from the fact that, when computing the POD basis with time instances of \(T^h\), that is with \(\{{\varvec{T}}_n \mid n=1, \ldots , n_t\}\), if nonzero essential boundary conditions are present, then \({span}\{ {\varvec{X}} \} \not \subset \mathcal {V}^h \), because some elements \({\varvec{v}} \in {span}\{ {\varvec{X}} \}\) are not identically zero on the portion of the boundary with non-homogeneous essential conditions.
Carlberg et al. [10] proposed two alternative procedures to collect snapshots for \(T^h\), for which the following comments apply when considering the general case of time dependent essential boundary conditions:

Snapshots of the form \(\{{\varvec{T}}_n - {\varvec{T}}_{n-1} \mid n=1, \ldots , n_t\}\). The problem with this strategy is that the set of snapshots is characterised by a high frequency content, giving a less compressible SVD spectrum [11] than a collection procedure based on the snapshots of the solution. Another problem of this strategy is the handling of time-dependent essential boundary conditions: in this case, it cannot be guaranteed that the snapshots will be identically zero on the boundary with essential boundary conditions.

Snapshots of the form \(\{{\varvec{T}}_n - {\varvec{T}}_0 \mid n=1, \ldots , n_t \}\), where \({\varvec{T}}_0\) is the initial condition. With this strategy it cannot be guaranteed that functions in \({span}\{ {\varvec{X}} \}\) will be admissible test functions, for instance when the essential boundary conditions differ from those of \(T^h_0\). Amsallem et al. [12] have observed that this strategy leads to more accurate ROMs than the previous one. As discussed in that work, using a different initial condition \({\varvec{T}}^*_0\) in the online stage requires, in principle, recomputing the snapshots to rebuild the POD modes for projection. Several fast alternatives to this problem are proposed in [12].
Gunzburger et al. [13] presented two schemes for handling inhomogeneous essential boundary conditions in the context of ROMs, without any additional treatment to reduce the cost associated with nonlinearities. They assumed that the Dirichlet boundary is divided into a set of P non-overlapping portions where the involved field is imposed as \(\beta _p(t)g_p({\varvec{x}})\), for \(p=1, \ldots , P\), where \(g_p\) are given functions and \(\beta _p\) are time-dependent parameters. In a first approach, the solution is written in terms of a linear combination of test functions vanishing on the portion of the boundary with essential boundary conditions, plus a linear combination of particular solutions of the steady-state version of the problem to be solved. In a second approach, they proposed to express the solution in terms of a set of POD basis functions not vanishing on the Dirichlet boundary, adding a set of equations describing the essential boundary condition; a QR decomposition of the resulting system then yields a set of test functions vanishing on that boundary. These techniques proved to work well in the context of ROMs [13]. However, Gunzburger et al. did not consider any particular treatment for reducing the cost associated with nonlinearities, nor did they propose a methodology for dealing with the inherent computational cost of the strong imposition of essential boundary conditions.
In the work of González et al. [1], the problem of imposing non-homogeneous essential boundary conditions in the context of a priori model order reduction methodologies (PGD) is tackled. They imposed the Dirichlet conditions by constructing a global function that verifies the essential boundary conditions, using the technique of transfinite interpolation [14]. A good example of an interpolation function is the inverse distance function and, as exposed by Rvachev et al. [14], different interpolation functions can be built based on the theory of R-functions. Although the methodology presented by González et al. is appealing and shows particular advantages in the PGD context, its use requires large symbolic algebra computations, leading to very complex algebraic expressions even for quite simple academic problems, which could hinder its application to practical problems. In the current study, we seek to develop physically based techniques that can be easily applied to domains coming from three-dimensional industrial problems.
In this work we analyse the treatment of time-dependent inhomogeneous essential boundary conditions from a general point of view, taking into consideration the costs associated with nonlinear problems and with the strong imposition of the essential boundary conditions. Alternatives based on the weak imposition of the boundary conditions are evaluated, combined with a reduction of the number of degrees of freedom at the boundary. The presented ideas are applied to nonlinear transient heat conduction problems with temperature-dependent thermophysical properties; the introduced strategies are, however, of general application.
Methods
This section describes first the problem statement and two variational formulations, one weakly imposing Dirichlet boundary conditions and the other strongly imposing those conditions. Then, an HROM that considers strong enforcement of boundary conditions is presented. Finally, the formulation of two alternative HROMs that weakly impose Dirichlet conditions is introduced.
Problem statement, variational formulation and finite element discretisation
The physical problem under consideration is a nonlinear transient heat conduction problem with temperature-dependent thermophysical properties. The problem is described by the equation \(\rho c \, \partial T / \partial t = \nabla \cdot (k \nabla T) + Q \ \text {in} \ \Omega \),
where \(\rho \) is the density, k is the thermal conductivity, c the heat capacity, T is the temperature, Q is the external heat source per unit volume, and \(\Omega \) is the space domain. The temperature field should verify the initial condition \(T({\varvec{x}},t=0) = T_0 \ \forall \ {\varvec{x}}\in \Omega \), where \(T_0\) is the given initial temperature field. Additionally, the following set of conditions must be verified at the disjoint portions \(\Gamma _d, \Gamma _q, \Gamma _c\) of the external boundary: \(T_{\Gamma _d}=T_d\), \(k\nabla T\cdot \mathbf {n}_{\Gamma _q}=q_w\) and \(k\nabla T\cdot \mathbf {n}_{\Gamma _c}=h_f(T_f-T)\), where \(\Gamma _d \cup \Gamma _q \cup \Gamma _c = \partial \Omega \), and where \(T_d\) is the imposed temperature at the boundary \(\Gamma _d\), \(q_w\) is the external heat flux at the boundary \(\Gamma _q\), \(h_f\) is the heat convection coefficient, \(T_f\) is the external fluid temperature at the portion of the boundary \(\Gamma _c\) and \(\mathbf {n}\) is the outward normal to the boundary under consideration.
In what follows, we briefly present the variational formulation of the problem and its finite element discretisation. Essential boundary conditions can be enforced strongly or weakly. In order to strongly enforce Dirichlet boundary conditions, let \(\mathcal {S} = \{ T \in \mathcal {H}^1(\Omega )\ / \ T_{\Gamma _d} = T_d \}\) be the space of trial solutions and \(\mathcal {V} = \{ v \in \mathcal {H}^1(\Omega )\ / \ v_{\Gamma _d} = 0 \}\) be the space of weighting or test functions, where \(\mathcal {H}^1\) is the first order Sobolev space. Then, the variational formulation is given as follows: Find \(T \in \mathcal {S}\) such that \(\forall w \in \mathcal {V}\), \(\int _\Omega w\, \rho c\, \frac{\partial T}{\partial t}\, d\Omega + \int _\Omega \nabla w \cdot k \nabla T\, d\Omega = \int _\Omega w\, Q\, d\Omega + \int _{\Gamma _q} w\, q_w\, d\Gamma + \int _{\Gamma _c} w\, h_f (T_f - T)\, d\Gamma \).
Let \(\mathcal {S}^h \subset \mathcal {S}\) and \(\mathcal {V}^h \subset \mathcal {V}\) be subspaces of the trial and test functional spaces. Then, in matrix notation, \(T^h \in \mathcal {S}^h\) is given by \(T^h({\varvec{x}},t_n) = {\varvec{N}}^T {\varvec{T}}_n\), where \({\varvec{N}}\) denotes the finite element basis and \({\varvec{T}}_n \in \mathbb {R}^N\) denotes the FEM degrees of freedom, with N the dimension of the FEM space. Then, using a Bubnov–Galerkin projection and a modified Backward-Euler scheme for time integration, the residual of the nonlinear thermal problem in its discrete expression reads [11]
where
In order to weakly impose Dirichlet boundary conditions, Lagrange multipliers are adopted. The idea is to remove, from the trial and test function spaces, the constraint on the portion of the boundary corresponding to essential boundary conditions. Accordingly, let \(\mathcal {V} = \{ v \in \mathcal {H}^1(\Omega )\}\) be the space of trial and test functions for the temperature, and let \(\mathcal {Q} = \{ q \in \mathrm {L}_2(\Gamma )\}\) be the space of trial and test functions for the Lagrange multipliers. Then, the variational formulation is given as follows: Find \((T,\lambda ) \in \mathcal {V} \times \mathcal {Q}\) such that \(\forall (w,q) \in \mathcal {V} \times \mathcal {Q}\)
Proceeding as before, let \(\mathcal {V}^h \subset \mathcal {V}\) and \(\mathcal {Q}^h \subset \mathcal {Q}\). Therefore, in matrix notation, \(T^h \in \mathcal {V}^h\) and \(\lambda ^h \in \mathcal {Q}^h\) are given by \(T^h({\varvec{x}},t_n) = {\varvec{N}}^T {\varvec{T}}_n\) and \(\lambda ^h({\varvec{x}},t_n) = \bar{{\varvec{N}}}^T \varvec{\lambda }_n\), where \({\varvec{N}}\) denotes the finite element basis for the temperature field and \({\varvec{T}}_n \in \mathbb {R}^N\) denotes the FEM nodal degrees of freedom. Similarly, \(\bar{{\varvec{N}}}\) denotes the finite element basis for the Lagrange multipliers, and \(\varvec{\lambda }_n \in \mathbb {R}^{N_\lambda }\) denotes the parameters corresponding to the Lagrange multipliers. Then, the residual characterising the FEM discretisation can be written as
where the new terms with respect to the previous formulation are given by
HROM formulation by strongly enforcing boundary conditions
The HROM associated with the formulation given by Eq. (3) is introduced here. Each nonlinear contribution to \(\varvec{\varPi }_n\) is hyperreduced separately, as done by Cosimo et al. [11]. Therefore, each of these terms has an associated POD basis \(\varvec{\varPhi }_i\) for its gappy data reconstruction [15–17]. In what follows, suffixes \(i \in \{ c,k,f,q\}\) are used to identify each term. We emphasise that the sampling is performed independently for each term, but the number of sampling points \(n_s\) and the number of gappy modes \(n_g\) are always the same for all of them. In what follows, \(\widehat{\cdot }\) denotes the vector of \(n_s\) components sampled from the associated complete term. To compute the POD modes \(\varvec{\varPhi }_i\), snapshots are taken of each individual contribution at each time step, after convergence of the Newton–Raphson scheme.
To obtain the hyperreduced residual \(\varvec{\varPi }^p_n\) we project the gappy approximation to \(\varvec{\varPi }_n\) with the basis \({\varvec{X}}\), which leads to
where \({\varvec{A}}_i = {\varvec{X}}^T\varvec{\varPhi }_i(\widehat{\varvec{\varPhi }}_i^T\widehat{\varvec{\varPhi }}_i)^{-1}\widehat{\varvec{\varPhi }}_i^T\), with \(i \in \{c,k,f,q\}\). Note that the matrices \({\varvec{A}}_i\) are computed in the offline stage.
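The offline/online split of the gappy reconstruction can be sketched as follows; the bases, the sampling indices and all sizes are stand-ins assumed only for illustration, with one generic term in place of the four terms \(i \in \{c,k,f,q\}\):

```python
import numpy as np

# Sketch of the offline gappy matrix A = X^T Phi (Phi_hat^T Phi_hat)^{-1} Phi_hat^T
# for one nonlinear term, with stand-in orthonormal bases and random sampling.
rng = np.random.default_rng(1)
N, k, n_g, n_s = 120, 8, 10, 10        # HF DOFs, projection modes, gappy modes, sample points

X = np.linalg.qr(rng.standard_normal((N, k)))[0]      # projection basis (stand-in)
Phi = np.linalg.qr(rng.standard_normal((N, n_g)))[0]  # gappy basis of one term (stand-in)
sample = rng.choice(N, size=n_s, replace=False)       # sampled DOF indices

Phi_hat = Phi[sample, :]               # rows of Phi at the sampled DOFs
# Offline: A = X^T Phi (Phi_hat^T Phi_hat)^{-1} Phi_hat^T, a k x n_s matrix
A = X.T @ Phi @ np.linalg.solve(Phi_hat.T @ Phi_hat, Phi_hat.T)

# Online: the nonlinear term G is evaluated only at the sampled DOFs.
# If G lies in span{Phi}, the projected reconstruction A @ G_hat equals X^T G.
G = Phi @ rng.standard_normal(n_g)     # a term exactly representable in the gappy basis
assert np.allclose(A @ G[sample], X.T @ G)
```

Online, only the small product `A @ G_hat` is formed, which is what makes the cost independent of the HF dimension N.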
In what follows, a consistent snapshot collection strategy for \({\varvec{X}}\) taking into account general essential boundary conditions is introduced. When solving the variational problem given by Eq. (2) in a finite dimensional space, an approximate solution \(T^h \in \mathcal {S}^h\) is described as \(T^h = T_d^h + v^h\), where \(v^h \in \mathcal {V}^h\) and \(T_d^h\) is the finite dimensional version of \(T_d\). Then, the trial solutions \(T^h\) and the test functions \(w^h\) are given by
where \({\varvec{\eta }}\) are the parameters associated with the test functions, and the DOFs \({\varvec{T}}_n\) are split into the parameters describing the boundary with Dirichlet boundary conditions, \({\varvec{T}}^B\), and the DOFs \({\varvec{T}}^I\) that are not part of that boundary. Functions \({\varvec{N}}^I\) and \({\varvec{N}}^B\) are the FEM shape functions associated with the internal and boundary DOFs, respectively.
Functions with global support are used in the context of ROMs, in contrast to FEM basis functions whose support is local. Therefore, the notion of internal/boundary degrees of freedom is lost in ROMs, making it necessary to express \(T^h\) and \(w^h\) as
where \({\varvec{a}}_n\) and \({\varvec{w}}_n\) are the amplitudes associated to the modes \({\varvec{X}}\).
In order to obtain admissible test functions \(\widehat{w}^h\), the restriction \(\widehat{w}^h_{\Gamma _d}=0\) must be satisfied. That is why, for the design of a consistent snapshot collection strategy, the snapshots must be of the form \({\varvec{T}} - {\varvec{T}}_d\). The problem then lies in the correct description of \(T_d^h\). A possible solution is to describe it as in standard FEM, i.e., \(T_d^h = {\varvec{N}}^{B,T}{\varvec{T}}^B_n\), but this could lead to a snapshot set with very high frequency content, decreasing the compressibility of the signal [11].
In order to avoid this inconvenience, we propose to compute a set of static modes that describe the behaviour of the portion of the boundary with essential boundary conditions. The procedure is similar to that followed by the Craig–Bampton or Guyan–Irons methods [18–21]. Since we want to build a set of static modes to describe the boundary, we consider only the term \({\varvec{G}}^k\) in Eq. (3). Simplifying the notation, this term at time instant \(t_n\) is given by \({\varvec{G}}^k = {\varvec{K}} {\varvec{T}}\), where \({\varvec{K}}\) is any linearisation of the stiffness matrix and \({\varvec{T}} \equiv {\varvec{T}}_n\). We can neglect nonlinearities at this point because we are only interested in finding a basis for expressing the essential boundary conditions. Then, by partitioning into internal and boundary DOFs, we write \({\varvec{K}}_{II} {\varvec{T}}^I + {\varvec{K}}_{IB} {\varvec{T}}^B = {\varvec{0}}\),
from which we obtain, by static condensation, \({\varvec{T}}^I = -{\varvec{K}}_{II}^{-1} {\varvec{K}}_{IB} {\varvec{T}}^B\). In this case \({\varvec{T}}^I\) can be regarded as the response to a temperature \({\varvec{T}}^B\) imposed on the portion of the boundary when only the term \({\varvec{G}}^k\) is considered. Then, the static modes, which describe the response to unit temperatures imposed at \(\Gamma _d\), are given by \({\varvec{\varPhi }}_B = [-{\varvec{K}}_{II}^{-1} {\varvec{K}}_{IB};\ {\varvec{I}}]\),
and the function \(T_d^h\) used to denote the essential boundary conditions is assumed to lie in \({span}\{ {\varvec{\varPhi }}_B \}\). We remark that this procedure is similar to the method proposed by Gunzburger et al. [13], which considers particular solutions derived from the steady-state version of the system of equations, but interpreted from a different perspective.
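The static condensation above can be sketched as follows; an SPD random matrix stands in for the linearised conduction matrix, and the internal-first DOF ordering and block sizes are assumptions made for the example:

```python
import numpy as np

# Minimal sketch of the static boundary modes Phi_B, assuming an SPD
# stand-in conduction matrix K partitioned into internal (I) and
# Dirichlet-boundary (B) DOFs, with internal DOFs ordered first.
rng = np.random.default_rng(2)
n_i, n_b = 20, 4
M = rng.standard_normal((n_i + n_b, n_i + n_b))
K = M @ M.T + (n_i + n_b) * np.eye(n_i + n_b)   # SPD stand-in for K

K_II = K[:n_i, :n_i]
K_IB = K[:n_i, n_i:]

# Internal response to imposed boundary temperatures: T^I = -K_II^{-1} K_IB T^B
Phi_int = -np.linalg.solve(K_II, K_IB)

# One static mode per boundary DOF; the identity block makes the modes
# interpolatory at the boundary nodes.
Phi_B = np.vstack([Phi_int, np.eye(n_b)])       # (n_i + n_b) x n_b

# Each mode satisfies the condensed internal equilibrium K_II T^I + K_IB T^B = 0.
assert np.allclose(K_II @ Phi_int + K_IB, 0.0, atol=1e-8)
```

In a real model `K` would come from the FEM assembly, and one linear solve with `n_b` right-hand sides produces all the modes at once.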
Then, the approximation \(\widehat{T}^h\) is given by \(\widehat{T}^h = {\varvec{N}}^T {\varvec{\varPhi }}_B {\varvec{T}}^B_n + {\varvec{N}}^T {\varvec{X}} {\varvec{a}}_n \simeq T^h\). Note that the static modes \({\varvec{\varPhi }}_B\) have the property of being interpolatory at the boundary \(\Gamma _d\); thus \({\varvec{T}}^B_n\) has the physical interpretation of being the value of the field at the nodes lying on \(\Gamma _d\).
From this equation the following snapshot collection procedure arises: once the static modes have been computed, take snapshots of the form \(S_p=\{{\varvec{T}}_n - {\varvec{\varPhi }}_B {\varvec{T}}^B_n \mid n=1, \ldots , n_t \}\). This strategy has the following advantages:

The snapshots given by \(S_p\) tend to preserve the compressibility exhibited by the field \(T^h\).

General essential boundary conditions can be represented by \({\varvec{\varPhi }}_B\), while the process of imposing essential boundary conditions remains simple because of the interpolatory property of \({\varvec{\varPhi }}_B\) at \(\Gamma _d\).

Using different initial conditions in the online stage does not require recomputing the snapshots for \({\varvec{X}}\) or considering another alternative.
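The collection \(S_p\) can be sketched as follows; random matrices stand in for the high-fidelity snapshots and the static modes, and the internal-first DOF ordering is an assumption of the example:

```python
import numpy as np

# Sketch of the consistent collection S_p = {T_n - Phi_B T^B_n}, assuming
# internal-first DOF ordering and stand-in data for snapshots and modes.
rng = np.random.default_rng(3)
n_i, n_b, n_t = 20, 4, 30
N = n_i + n_b

Phi_B = np.vstack([rng.standard_normal((n_i, n_b)), np.eye(n_b)])  # stand-in static modes
T = rng.standard_normal((N, n_t))                                   # stand-in HF snapshots

T_B = T[n_i:, :]                 # boundary DOF histories T^B_n
S_p = T - Phi_B @ T_B            # lifted snapshots: zero on Gamma_d

# Every snapshot vanishes on the Dirichlet boundary, so the span of the
# resulting POD modes consists of admissible test functions.
assert np.allclose(S_p[n_i:, :], 0.0)

X = np.linalg.svd(S_p, full_matrices=False)[0]  # POD modes for projection
```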
It should be observed that the computational cost increases with the number of static modes. This is because, at each Newton iteration, the temperature field must be computed at least at the nodes involved in the gappy data procedure. That is, the cost of the operation \({\varvec{\varPhi }}_B {\varvec{T}}^B_n\) can be very high if a large number of static modes is used. In the worst-case scenario, the number of static modes equals the number of DOFs on the portion of the boundary with essential boundary conditions. In some cases, additional assumptions can be adopted to reduce the number of static modes. For instance, if the shape of the essential boundary condition does not change in time on some portion \(\Gamma _d^\theta \) of \(\Gamma _d\), a new static mode \({\varvec{\Theta }}\) can be built by summing all the static modes with support on \(\Gamma _d^\theta \) times the considered shape factor. A more general alternative is to describe the behaviour of the boundary by additionally approximating the boundary parameters as \({\varvec{T}}^B_n = \varvec{\Psi }_B {\varvec{d}}_n^\psi \), where \(\varvec{\Psi }_B\) are POD modes computed from a set of snapshots representative of the behaviour of the boundary, and \({\varvec{d}}_n^\psi \) are the associated parameters. Ideas of this kind have already been applied in the substructuring of linear problems [22, 23].
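The additional reduction of the boundary parameters can be sketched as follows; the smooth low-rank boundary histories are fabricated data assumed only for the example, as is the number of retained modes:

```python
import numpy as np

# Sketch of further reducing the boundary parameters, T^B_n ~ Psi_B d_n,
# with Psi_B the POD modes of stand-in boundary snapshot histories.
rng = np.random.default_rng(5)
n_b, n_t, n_psi = 40, 60, 3

# Stand-in boundary histories with low-rank structure (smooth in time).
t = np.linspace(0.0, 1.0, n_t)
x = np.linspace(0.0, 1.0, n_b)[:, None]
T_B = np.sin(np.pi * x) @ t[None, :]**2 + np.cos(np.pi * x) @ t[None, :]

# POD of the boundary snapshots, truncated to n_psi modes.
Psi_B = np.linalg.svd(T_B, full_matrices=False)[0][:, :n_psi]

# Online, only the few amplitudes d_n = Psi_B^T T^B_n are carried, so the
# product Phi_B (Psi_B d_n) replaces Phi_B T^B_n at a much lower cost.
d = Psi_B.T @ T_B
assert np.linalg.norm(T_B - Psi_B @ d) / np.linalg.norm(T_B) < 1e-10
```

Here the fabricated histories have rank two, so three modes reproduce them essentially exactly; real boundary data would dictate the truncation.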
Remark 1
In the examples section, all the static modes associated with the portion of the boundary with essential boundary conditions are retained, and no other approximation is applied to the boundary DOFs.
HROM formulation by weakly enforcing boundary conditions
Two alternative HROMs associated with the formulation given by Eq. (10) are now introduced, aimed at reducing the temperature DOFs \({\varvec{T}}_n\) and the Lagrange multipliers \(\varvec{\lambda }_n\). In the first one, \([{\varvec{T}}_n; \varvec{\lambda }_n]\) is reduced as a unit, i.e. \([{\varvec{T}}_n; \varvec{\lambda }_n] = {\varvec{X}}^{\varvec{c}} {\varvec{c}}_n\), where the POD modes \({\varvec{X}}^{\varvec{c}}\) are built from a set of snapshots composed of the temperature field and the Lagrange multipliers, and \({\varvec{c}}_n\) denotes the associated parameters. A second alternative is to reduce each physical quantity separately, i.e. \({\varvec{T}}_n = {\varvec{X}} {\varvec{a}}_n\) and \(\varvec{\lambda }_n = {\varvec{Y}} {\varvec{b}}_n\), where \({\varvec{X}}\) and \({\varvec{a}}_n\) are the POD modes and parameters associated with the temperature field, and \({\varvec{Y}}\) and \({\varvec{b}}_n\) are the POD modes and parameters associated with the Lagrange multipliers. From a general point of view, weakly enforcing boundary conditions has the following advantages over the use of static modes to represent essential boundary conditions:

Test functions for the temperature field are not required to meet the constraint \(T_{\Gamma _d} = 0\).

As introduced previously, the cost of computing the product \({\varvec{\varPhi }}_B {\varvec{T}}^B_n\) can be a penalising factor when a large number of static modes is required. By using Lagrange multipliers, this problem is avoided.
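The two reductions can be contrasted in a short sketch; the snapshot matrices, field sizes and mode counts below are stand-ins assumed only for illustration:

```python
import numpy as np

# Sketch of the two weak-imposition reductions, using stand-in snapshot
# matrices for the temperature (N DOFs) and the multipliers (N_lam DOFs).
rng = np.random.default_rng(4)
N, N_lam, n_t = 60, 6, 25
T_snap = rng.standard_normal((N, n_t))
lam_snap = rng.standard_normal((N_lam, n_t))

# Option 1: reduce [T; lambda] as a unit with a single basis X_c.
joint = np.vstack([T_snap, lam_snap])
n_c = 12
X_c = np.linalg.svd(joint, full_matrices=False)[0][:, :n_c]

# Option 2: reduce each field separately, keeping n_p > n_lam so the
# temperature has enough freedom to satisfy the constraints.
n_p, n_lam_modes = 8, 4
X = np.linalg.svd(T_snap, full_matrices=False)[0][:, :n_p]
Y = np.linalg.svd(lam_snap, full_matrices=False)[0][:, :n_lam_modes]
```

Option 1 keeps a single set of amplitudes `c_n`; option 2 partitions the reduced unknowns into `a_n` and `b_n`, which is what complicates the gappy sampling.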
When adopting the first option, the residual \(\varvec{\varPi }_n\) given by Eq. (10) is projected onto the space spanned by \({\varvec{X}}^{\varvec{c}}\) and each term is separately hyperreduced as done in [11]; the resulting expression is quite similar to the one given by Eq. (13), but takes into account the terms involving the constraint on the Dirichlet boundary.
In the second approach proposed in this section, each term of the residual \(\varvec{\varPi }_n\) from Eq. (10) is projected separately according to
Then, again, each contribution is separately hyperreduced following the work of Cosimo et al. [11]. It should be observed that this option is more difficult to implement than reducing \([{\varvec{T}}_n; \varvec{\lambda }_n]\) as a unit: the DOFs must be partitioned into temperature DOFs and Lagrange multipliers, which complicates the implementation of the gappy data procedure because these two kinds of unknowns are represented by two different vectors.
Remark 2
The techniques presented here apply to higher-order problems as well. For instance, let us consider a fourth-order one-dimensional problem in which Hermite polynomials are used in the FEM discretisation. When imposing the essential boundary conditions strongly, the procedure for computing static modes applies exactly as described before, with static modes obtained by imposing unit displacements and unit rotations at the boundary. When imposing the essential boundary conditions weakly, the only difference with the thermal case is that independent Lagrange multiplier fields are needed for each degree of freedom. We finally note that an extension of the techniques presented in [1] to fourth-order problems, in the context of the a priori reduced order method PGD, was proposed by Quesada et al. [24].
Results and discussion
We now show the application of the proposed snapshot collection strategies to two nonlinear transient heat conduction problems with time-dependent essential boundary conditions. To assess the performance and robustness of the proposed methods, we study the relative error introduced by the HROM. The relative error \(\epsilon \) characterising the HROM as a function of time is measured as \(\frac{\Vert T_R - T_H\Vert }{\max \limits _{t}\Vert T_H\Vert }\), where \(T_R\) is the solution obtained with the HROM, \(T_H\) is the High Fidelity solution of the same problem and \(\Vert \cdot \Vert \) denotes the \(L_2\) norm. Trilinear hexahedral elements are used in the examples to interpolate the temperature field; the Lagrange multipliers field is interpolated with bilinear quadrilateral elements. In what follows, \(n_p\) denotes the number of POD modes for \({\varvec{T}}_n\) and \(n_{\lambda }\) denotes the number of POD modes for \(\varvec{\lambda }_n\).
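The error measure just defined can be sketched in a few lines; the two trajectories below are fabricated stand-ins for the HROM and high-fidelity solutions:

```python
import numpy as np

# Sketch of the reported measure eps(t) = ||T_R - T_H|| / max_t ||T_H||,
# with stand-in trajectories stored column-wise (one column per time step).
rng = np.random.default_rng(6)
N, n_t = 100, 40
T_H = rng.standard_normal((N, n_t))                  # stand-in high-fidelity solution
T_R = T_H + 1e-3 * rng.standard_normal((N, n_t))     # stand-in HROM solution

# Discrete L2 norm per time step, normalised by the peak HF norm over time.
eps = np.linalg.norm(T_R - T_H, axis=0) / np.max(np.linalg.norm(T_H, axis=0))
```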
Example 1
This example was presented by Gunzburger et al. [13]. It consists of a transient heat conduction problem with constant properties \(\rho =k=c=1\) and a nonlinear heat source \(Q(T)=T^2\). The domain to be analysed is a \(1 \times 1 \times 0.1428\) cuboid, discretised using trilinear hexahedral elements with a total of 675 degrees of freedom. A time step \(\Delta t = 0.01\) is used for the time interval [0, 1]. The body is initially at temperature \(T_0=0\). A time-dependent essential boundary condition, \(T_d({\varvec{x}},t)\), is imposed, given by
The sides \(z=0\) and \(z=0.1428\) of the domain are insulated. Different time instants of the High Fidelity solution of the problem can be observed in Fig. 1. A total of 20 gappy points and 20 gappy modes were used in all cases.
The error obtained using static modes to represent the essential boundary condition can be observed in Fig. 2, where different numbers of projection modes were considered. As can be seen, good results are obtained, comparable to the ones reported by Gunzburger et al.
We have two alternatives for weakly imposing the essential boundary conditions. When reducing \({\varvec{T}}_n\) and \(\varvec{\lambda }_n\) as a unit, the error behaves as shown in Fig. 3. Although the obtained results seem to be quite good, we remark that convergence is not achieved for the cases \(n_p<7\), \(n_p=10\) and \(n_p=11\). Monotone convergence, for any number of projection modes, is achieved only when using 12 or more modes. This behaviour is related to the fact that the temperature field must have enough freedom to be able to meet the restrictions imposed by the Lagrange multipliers.
The error obtained when reducing \({\varvec{T}}_n\) and \(\varvec{\lambda }_n\) separately can be observed in Fig. 4. In these tests, we took \(n_\lambda =4\). It should be kept in mind that \(n_p\) should be greater than \(n_\lambda \); otherwise \({\varvec{T}}_n\) will not have enough freedom to satisfy the restrictions. In this case, convergence is achieved for \(n_p \ge 4\), but a good approximation error for the temperature field is observed for \(n_p \ge 7\). We remark that for \(n_\lambda > 4\) in this numerical experiment, the reduced iteration matrix became badly conditioned. A pivoting strategy was used to achieve convergence, eliminating the equations associated with zero pivots, and it was observed that most of the time the constraint equations corresponding to modes higher than four were eliminated.
When comparing the three alternatives, it is observed that the static-modes option yields the lowest error. Nevertheless, its cost is higher than that of the strategies imposing the Dirichlet boundary conditions weakly. Concerning the latter two alternatives, the strategy of reducing \({\varvec{T}}_n\) and \(\varvec{\lambda }_n\) as a unit leads to the lowest errors for the same number of reduced DOFs. For example, when using that alternative with \(n_p=12\) the error for the temperature field is \(O(10^{-4})\), whereas when reducing \({\varvec{T}}_n\) and \(\varvec{\lambda }_n\) separately with \(n_p=8\) and \(n_\lambda =4\), the error is \(O(10^{-3})\). The approximation error for \(\varvec{\lambda }_n\) is always lower when reducing \({\varvec{T}}_n\) and \(\varvec{\lambda }_n\) as a unit. However, as seen in the numerical experiments, the number of POD modes needed to describe \({\varvec{T}}_n\) and \(\varvec{\lambda }_n\) as a unit must be fairly large in order to give the temperature field enough freedom to satisfy the restrictions imposed by the Lagrange multipliers.
Example 2
We consider next a nonlinear transient heat conduction problem, where the heat capacity is \(c=0.1792 ~ T + 495.20\) and the thermal conductivity is \(k=0.25~ T + 70\). The material density is \(\rho = 1\). The domain to be analysed is a \(\pi \times \pi \times 0.4487\) cuboid. It is discretised using trilinear hexahedral elements with a total of 675 degrees of freedom. A time step \(\Delta t = 1\) is used for the time interval [0, 600]. The body is initially at temperature \(T_0=1200\). A time dependent essential boundary condition is imposed on side \(x=0\). The other sides of the domain are insulated. The essential boundary condition \(T_d({\varvec{x}},t)\) is given by
Different time instants of the High Fidelity solution of the problem can be observed in Fig. 5. A total of 60 gappy points and 60 gappy modes were used in all cases, except for the equations corresponding to the Lagrange multipliers when reducing \({\varvec{T}}_n\) and \(\varvec{\lambda }_n\) separately, where five gappy points and modes were used.
The results obtained with the different schemes can be observed in Figs. 6, 7 and 8. Comments similar to those of the previous example apply here. We remark that the scheme that weakly imposes the Dirichlet boundary conditions and reduces \({\varvec{T}}_n\) and \(\varvec{\lambda }_n\) as a unit begins to converge for \(n_p \ge 13\).
Conclusion
Several alternatives for building Hyper-Reduced Order Models to solve nonlinear thermal problems with time-dependent inhomogeneous essential boundary conditions were analysed and compared.
One strategy considers the use of static modes for strongly imposing the boundary conditions. This approach is similar to the method presented by Gunzburger et al. [13], who proposed to use particular solutions instead of static modes. Good behaviour was obtained using static modes, with results comparable to those of Gunzburger et al. Even though this method proved to be a robust technique for describing essential boundary conditions, the associated computational cost is high for models that require a large number of static modes.
In order to overcome the disadvantages of the static-modes approach, two other alternatives, based on a weak imposition of the essential boundary conditions, were studied. One alternative consists in reducing the primal and the secondary fields as a unit, while the other consists in reducing them separately. It was observed that, for the same number of reduced DOFs, the former approach leads to the lowest errors for the primal (temperature) field. The numerical experiments also showed that the number of POD modes used to describe the primal and the secondary fields as a unit must be large enough to give the primal (temperature) field enough freedom to satisfy the restrictions imposed by the Lagrange multipliers.
In a future work, the case of a time-dependent variation of the support of the essential boundary conditions will be studied.
Notes
 1.
For the sake of conciseness, in this work we do not consider the objective function \(T^h\) to depend on a set of analysis parameters \({\varvec{\mu }}\). Were that the case, the snapshot-collection strategies introduced herein would apply directly, by applying them to each training parameter \({\varvec{\mu }}_i\).
References
 1.
González D, Ammar A, Chinesta F, Cueto E. Recent advances on the use of separated representations. Int J Numer Methods Eng. 2010;81(5):637–59.
 2.
Kunisch K, Volkwein S. Galerkin proper orthogonal decomposition methods for a general equation in fluid dynamics. SIAM J Numer Anal. 2002;40(2):492–515.
 3.
Bergmann M, Bruneau CH, Iollo A. Enablers for robust POD models. J Comput Phys. 2009;228(2):516–38.
 4.
Néron D, Ladevèze P. Proper generalized decomposition for multiscale and multiphysics problems. Arch Comput Methods Eng. 2010;17(4):351–72.
 5.
Chinesta F, Ammar A, Cueto E. Recent advances and new challenges in the use of the proper generalized decomposition for solving multidimensional models. Arch Comput Methods Eng. 2010;17(4):327–50.
 6.
Strang G. The fundamental theorem of linear algebra. Am Math Mon. 1993;100(9):848–55.
 7.
Sirovich L. Turbulence and the dynamics of coherent structures. Part I: Coherent structures. Part II: Symmetries and transformations. Part III: Dynamics and scaling. Q Appl Math. 1987;45:561–71.
 8.
Chatterjee A. An introduction to the proper orthogonal decomposition. Curr Sci. 2000;78(7):808–17.
 9.
Carlberg K, Bou-Mosleh C, Farhat C. Efficient non-linear model reduction via a least-squares Petrov–Galerkin projection and compressive tensor approximations. Int J Numer Methods Eng. 2011;86(2):155–81.
 10.
Carlberg K, Farhat C, Cortial J, Amsallem D. The GNAT method for nonlinear model reduction: effective implementation and application to computational fluid dynamics and turbulent flows. J Comput Phys. 2013;242:623–47.
 11.
Cosimo A, Cardona A, Idelsohn S. Improving the k-compressibility of hyper reduced order models with moving sources: applications to welding and phase change problems. Comput Methods Appl Mech Eng. 2014;274:237–63.
 12.
Amsallem D, Zahr MJ, Farhat C. Nonlinear model order reduction based on local reduced-order bases. Int J Numer Methods Eng. 2012;92(10):891–916.
 13.
Gunzburger MD, Peterson JS, Shadid JN. Reduced-order modeling of time-dependent PDEs with multiple parameters in the boundary data. Comput Methods Appl Mech Eng. 2007;196(4–6):1030–47.
 14.
Rvachev VL, Sheiko TI, Shapiro V, Tsukanov I. Transfinite interpolation over implicitly defined sets. Comput Aided Geom Des. 2001;18(3):195–220.
 15.
Everson R, Sirovich L. Karhunen–Loève procedure for gappy data. J Opt Soc Am A. 1995;12:1657–64.
 16.
Ryckelynck D. Hyper-reduction of mechanical models involving internal variables. Int J Numer Methods Eng. 2009;77(1):75–89.
 17.
Hernández JA, Oliver J, Huespe AE, Caicedo MA, Cante JC. High-performance model reduction techniques in computational multiscale homogenization. Comput Methods Appl Mech Eng. 2014;276:149–89.
 18.
Guyan RJ. Reduction of stiffness and mass matrices. AIAA J. 1965;3(2):380.
 19.
Irons B. Structural eigenvalue problems: elimination of unwanted variables. AIAA J. 1965;3(5):961–2.
 20.
Craig R, Bampton M. Coupling of substructures for dynamic analysis. AIAA J. 1968;6:1313–9.
 21.
Géradin M, Cardona A. Flexible multibody dynamics: a finite element approach. London: Wiley; 2001.
 22.
Craig R, Chang CJ. Substructure coupling for dynamic analysis and testing. Technical Report CR-2781, NASA. 1977.
 23.
Rixen DJ. Force modes for reducing the interface between substructures. In: SPIE proceedings series. Society of Photo-Optical Instrumentation Engineers; 2002.
 24.
Quesada C, Xu G, González D, Alfaro I, Leygue A, Visonneau M, Cueto E, Chinesta F. Un método de descomposición propia generalizada para operadores diferenciales de alto orden [A proper generalised decomposition method for high-order differential operators]. Revista Internacional de Métodos Numéricos para Cálculo y Diseño en Ingeniería. 2014;31:188–97.
Authors' contributions
All authors contributed to the development of the theory. The computer code for the numerical simulations was developed by AC. All authors read and approved the final manuscript.
Acknowledgements
This work received financial support from CONICET (Consejo Nacional de Investigaciones Científicas y Técnicas, PIP 1105), Agencia Nacional de Promoción Científica y Tecnológica (PICT 2013-2894), and Universidad Nacional del Litoral (CAI+D 2011) from Argentina, and from the European Research Council under the Advanced Grant ERC-2009-AdG "Real Time Computational Mechanics Techniques for Multi-Fluid Problems".
Competing interests
The authors declare that they have no competing interests.
Rights and permissions
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
Cite this article
Cosimo, A., Cardona, A. & Idelsohn, S. General treatment of essential boundary conditions in reduced order models for nonlinear problems. Adv. Model. and Simul. in Eng. Sci. 3, 7 (2016). doi:10.1186/s40323-016-0058-8
Keywords
 HROM
 Reduced Order Models
 Essential boundary conditions
 FEM