
Real-time data assimilation and control on mechanical systems under uncertainties

Abstract

This research work deals with the implementation of so-called Dynamic Data-Driven Application Systems (DDDAS) in structural mechanics. It aims at designing a real-time numerical feedback loop between a physical system of interest and its numerical simulator, so that (i) the simulation model is dynamically updated from sequential in situ observations of the system; (ii) the system is appropriately driven and controlled in service using predictions given by the simulator. In order to build such a feedback loop and take various uncertainties into account, a suitable stochastic framework is considered for both data assimilation and control, with the propagation of these uncertainties from model updating up to command synthesis by means of a specific and attractive sampling technique. Furthermore, reduced order modeling based on the Proper Generalized Decomposition (PGD) technique is used throughout the process in order to meet the real-time constraint: it permits fast multi-query evaluations and predictions from the parametrized physics-based model in the online phase of the feedback loop. The control of a fusion welding process under various scenarios is considered to illustrate the proposed methodology and to assess the performance of the associated numerical architecture.

Introduction

The continuous interaction between physical systems and high-fidelity simulation tools (i.e. virtual twins) has become a key enabler for industry as well as an appealing research topic over the last decade (see for instance [11]). This is at the heart of the Dynamic Data Driven Application System (DDDAS) concept [12], in which a simulation model is used to make decisions and drive an evolving physical system, and is at the same time fed by data collected on this system in order to update parameters and ensure continual consistency between numerical predictions and physical reality. In other words, the DDDAS concept aims at building a numerical feedback loop between the physical system and its simulator, with on-the-fly data assimilation and control (Fig. 1). Nevertheless, there are two main numerical challenges in the implementation of such a loop for structural mechanics applications. On the one hand, the dialog between numerical models and physical systems is in practice subject to several sources of uncertainty, including measurement noise, modeling errors, or variabilities in the system properties and environment. On the other hand, a relevant feedback loop requires effective numerical methods such that real-time computations and interactions can be performed.

Fig. 1: Scheme of the DDDAS feedback control loop

The paper presents a general strategy, addressing the two previous challenges, for the design of an effective numerical feedback loop between a physical system and its simulator. It considers a stochastic framework for sequential data assimilation and control, that uses Bayesian inference for model updating from in situ data as well as uncertainty propagation to make predictions from the model and synthesize control laws. Such a framework considers parameters to be inferred as random variables, and it naturally takes all uncertainty sources into account [2, 6, 17, 22, 30, 31].

The proposed strategy also relies on two ingredients that make it possible to meet the real-time constraint. First, Transport Map sampling [13] is used as an alternative to Markov Chain Monte-Carlo (MCMC) [14, 25] or Sequential Monte-Carlo [1] techniques in order to perform fast Bayesian inference, with convenient sampling of multi-dimensional posterior densities and associated adaptive strategies. The Transport Map technique consists in building a deterministic polynomial mapping between the posterior probability measure of interest and a simple reference measure (e.g. a Gaussian distribution) [21, 23, 29]. It thus permits an automatic exploration, from the constructed mapping, of the multi-parametric stochastic space in order to effectively derive useful information such as means, standard deviations, maxima, or marginals on model parameters. Such pieces of information can then be propagated to model outputs in order to quantify uncertainty, synthesize the appropriate command in a stochastic context, and thus make safe decisions on the evolving system.

Second, model reduction by means of the Proper Generalized Decomposition (PGD) technique [9, 10] is introduced in order to reduce the computational effort for the evaluation of multi-parametric numerical models, and therefore further speed up the overall process. The PGD approximation builds a modal representation of the multi-parametric model solution with separated variables and explicit dependency on model parameters. This representation is computed in an offline phase with controlled accuracy [8] before being evaluated at low cost in the online phase. It is shown in the paper that the PGD technique (i) facilitates the computation of the likelihood function involved in the Bayesian inference framework [3, 26]; (ii) can be effectively coupled with Transport Map sampling for the calculation of the maps, as it directly provides information on solution derivatives [27, 28]; (iii) is a particularly effective tool for performing uncertainty propagation through the forward model as well as command law synthesis. A particular focus is placed here on the latter point, dealing with effective command in a stochastic framework; this has been investigated in very few works in the literature, even though it is a major aspect of the DDDAS procedure. The dynamic command synthesis we propose, using the advantages of Transport Map sampling and PGD model reduction, is the main novelty of the paper. It permits the construction and implementation of the full DDDAS feedback loop.

The constructed feedback loop is here illustrated in the context of a fusion welding process. It involves a simplified welding model introduced in [16] (and described in Fig. 2), which is supposed to be an accurate enough representation of the physical phenomena of interest.

Fig. 2: Illustration of the considered welding model

In this two-dimensional model, two metal plates are welded by a heat source whose center moves along the geometry. The unknown of the problem is the dimensionless temperature field T in the space domain \(\Omega \) and over the time domain I; \(T=0\) when the temperature is equal to the room temperature, and \(T=1\) when the temperature is equal to the melting temperature of the material. On the right-hand side boundary \(\Gamma _D\) (see Fig. 2), the temperature is assumed to be equal to the room temperature (\(T=0\)). The other boundaries are assumed to be insulated.

To solve the problem, the coordinate system is made to move at the same speed as the heat source. The model problem is thus described by the following heat equation with a convective term:

$$\begin{aligned} \frac{\partial T}{\partial t} + {\underline{v}}(Pe)\cdot {\underline{\text {grad}}} T - \kappa \Delta T = s(\sigma ) \end{aligned}$$
(1)

where \({\underline{v}}=[Pe; 0]\) is the advection velocity, \(Pe=v\cdot {L_c/\kappa }\) is the Peclet number (\(L_c\) being the characteristic length of the problem), and \(\kappa \) is the thermal diffusivity of the material. The volume heat source term s is defined by the following Gaussian distribution in the space domain:

$$\begin{aligned} s(x,y;\sigma )=\frac{u}{2 \pi \sigma ^2} \exp \left( - \frac{ \left( x-x_c\right) ^2+ \left( y-y_c\right) ^2 }{2 \sigma ^2 } \right) \end{aligned}$$
(2)

where coordinates \((x_c,y_c)\) represent the location of the heat source center, u is the magnitude, and \(\sigma \) is a scalar parameter that drives the source expansion.

From the integration of (1) over \(\Omega \), the weak formulation in space of the problem is of the form: find \(T \in {\mathcal {T}}\) such that

$$\begin{aligned} a(T,T^*)=l(T^*) \quad \forall T^* \in {\mathcal {T}} \end{aligned}$$
(3)

with:

$$\begin{aligned} a(T,T^*)&=\int _{\Omega } \left\{ \left( \frac{\partial T}{\partial t} + {\underline{v}}\cdot {\underline{\text {grad}}}\, T\right) T^*+\kappa \, {\underline{\text {grad}}}\, T\cdot \underline{\text {grad}}\, T^*\right\} \text {d} \Omega \\ l(T^*)&= \int _{\Omega } s\, T^*\, \text {d} \Omega \end{aligned}$$
(4)

The functional space \({\mathcal {T}}\) is the Bochner space \(L^2(I;{\mathcal {S}}) \simeq {\mathcal {S}} \otimes {\mathcal {I}}\), with \({\mathcal {S}} = H^1_{0|\Gamma _D}\) the Sobolev space of \(H^1\) functions on \(\Omega \) satisfying homogeneous Dirichlet boundary conditions on \(\Gamma _D\), and \({\mathcal {I}}=L^2(I)\) the Lebesgue space.

The model parameters to be updated from indirect noisy data are \({\mathbf {p}}=\{\sigma ,Pe\}\), which are respectively related to the spatial spreading and the speed of the heat source, as illustrated in Fig. 3. They may vary over the time domain. Data consist of measurements of the temperatures \(T_1\) and \(T_2\) at two points in \(\Omega \) (see Fig. 2). From these data, assimilated sequentially in time, the purpose is twofold: (i) to dynamically update the model parameters \({\mathbf {p}}\); (ii) to control from the updated model the temperature \(T_3\) at another point in \(\Omega \), which is the output of interest assumed to be unreachable by direct measurement, and to perform corrections on the welding process if necessary. The control variable is the magnitude u of the heat source, which is assumed to be piecewise constant in time as illustrated in Fig. 3.

Fig. 3: Illustration of the two model parameters (left and center), and time evolution of the command (right)

The paper outline is as follows: in “Reduced order modeling using PGD” section, the PGD model reduction applied to the above reference model is detailed. It is then employed in association with Bayesian inference and Transport Map sampling for fast data assimilation and model updating in “Real-time data assimilation with Bayesian inference and Transport Map sampling” section. All these tools are beneficially reused for on-the-fly command synthesis and system control in “Real-time control” section. Several numerical experiments are reported in “Results and discussion” section, which show the interest and performance of the proposed feedback loop by considering various welding scenarios. Sequential data assimilation, uncertainty propagation up to the output of interest, and real-time control of the welding process are illustrated for each of these scenarios. Eventually, conclusions and prospects are drawn in “Conclusions” section.

Methods

Reduced order modeling using PGD

Due to the increasing number of high-dimensional approximation problems, which naturally arise in many situations such as optimization or uncertainty quantification, model reduction techniques have attracted growing interest and are now a mature technology [19, 24]. Tensor methods are among the most prominent tools for the construction of model reduction techniques since, in many practical applications, the approximation of high-dimensional solutions of Partial Differential Equations (PDEs) is made computationally tractable by using low-rank tensor formats. In particular, an appealing technique based on a canonical format and referred to as Proper Generalized Decomposition (PGD) was introduced and successfully used in many applications of computational mechanics dealing with multiparametric problems [5, 7, 9, 10, 15, 18, 20]. Contrary to POD, the PGD approximation does not require any knowledge of the solution; it operates in an iterative strategy in which basis functions (or modes) are computed from scratch by solving eigenvalue problems.

In the classical PGD framework, the reduced model is built directly from the weak formulation (here (3)) of the considered PDE, integrated over the parametric space. The approximate reduced solution \(T^m\) at order m is then sought in a separated form with respect to space, time, and model parameters \({\mathbf {p}}=\{p_1,p_2,\dots ,p_d\}\) seen as extra-coordinates [10]:

$$\begin{aligned} T^m({\mathbf {x}},t,{\mathbf {p}})=\sum _{k=1}^{m} \Lambda _k({\mathbf {x}}) \lambda _k(t) \prod _{i=1}^{d} \alpha ^i_k(p_i) \end{aligned}$$
(5)

The computation of the PGD modal representation is performed in an offline phase by using an iterative method [10], before being evaluated in an online phase at any space-time location and any parameter value from products and sums of one-parameter functions.
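
To fix ideas, a minimal sketch of such an online evaluation is given below (in Python, with illustrative names; the actual implementation is not provided in the paper). It assumes the modes have been precomputed in the offline phase and stored on discrete space, time, and parameter grids, so that evaluating \(T^m\) reduces to interpolations, products, and sums.

```python
import numpy as np

def pgd_evaluate(Lambda, lam, alphas, param_grids, ix, it, p):
    """Online evaluation of a rank-m PGD expansion (5) at mesh node ix,
    time step it, and parameter point p = (p_1, ..., p_d).

    Lambda      : (m, n_x) array, spatial modes Lambda_k at the mesh nodes
    lam         : (m, n_t) array, time modes lambda_k at the time steps
    alphas      : list of d arrays (m, n_i), parametric modes alpha_k^i
    param_grids : list of d 1D arrays, grids on which the alphas are stored
    """
    m = Lambda.shape[0]
    T = 0.0
    for k in range(m):                      # sum of rank-one products
        term = Lambda[k, ix] * lam[k, it]
        for i, (grid, alpha) in enumerate(zip(param_grids, alphas)):
            # 1D interpolation of the k-th parametric mode at p_i
            term *= np.interp(p[i], grid, alpha[k])
        T += term
    return T

# e.g. a random rank-3 decomposition with d = 2 parameters (sigma, Pe):
rng = np.random.default_rng(0)
Lambda, lam = rng.random((3, 50)), rng.random((3, 45))
grids = [np.linspace(0.3, 0.5, 20), np.linspace(-70.0, -50.0, 20)]
alphas = [rng.random((3, 20)), rng.random((3, 20))]
T = pgd_evaluate(Lambda, lam, alphas, grids, ix=10, it=44, p=(0.4, -60.0))
```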

For the multi-parametric problem of interest, the construction of the PGD solution is detailed in [26]. It reads:

$$\begin{aligned} T^m({\mathbf {x}},t,\sigma ,Pe)=\sum _{k=1}^{m} \Lambda _k({\mathbf {x}}) \lambda _k(t) \alpha ^1_k(\sigma ) \alpha ^2_k(Pe) \end{aligned}$$
(6)

Considering a heat source term with \(u=1\), the first four PGD modes are represented in Fig. 4 (spatial modes), Fig. 5 (parameter modes), and Fig. 6 (time modes).

Fig. 4: First four spatial modes of the PGD solution

Fig. 5: First four parametric modes of the PGD solution

Fig. 6: First four time modes of the PGD solution

Real-time data assimilation with Bayesian inference and Transport Map sampling

Basics on Bayesian inference

The purpose of Bayesian inference is to characterize the posterior probability density function (pdf) \(\pi ({\mathbf {p}}|{\mathbf {d}}^\text {obs})\) of some model parameters \({\mathbf {p}}\) given some indirect and noisy observations \({\mathbf {d}}^\text {obs}\). In this context, the Bayesian formulation of the inverse problem reads [17]:

$$\begin{aligned} \pi ({\mathbf {p}}|{\mathbf {d}}^\text {obs}) = \frac{1}{C}\, \pi ({\mathbf {d}}^\text {obs}|{\mathbf {p}}) \cdot \pi _0({\mathbf {p}}) \end{aligned}$$
(7)

where \(\pi _0({\mathbf {p}})\) is the prior pdf, related to the a priori knowledge of the parameters before the consideration of data \({\mathbf {d}}^\text {obs}\), \(\pi ({\mathbf {d}}^\text {obs}|{\mathbf {p}})\) is the likelihood function that corresponds to the probability for the model \({\mathcal {M}}\) to predict the observations \({\mathbf {d}}^\text {obs}\) given values of the parameters \({\mathbf {p}}\), and \(C= \int \pi ({\mathbf {d}}^\text {obs}|{\mathbf {p}})\, \pi _0({\mathbf {p}}) \,\text {d} {\mathbf {p}}\) is a normalization constant. No assumption is made on the probability densities (prior, measurement noise) or on the linearity of the model.

We consider here the classical case of an additive measurement noise with density \(\pi _\text {meas}\). We also consider that there is no modeling error, even though such an error source could be easily taken into account in the Bayesian inference framework (provided quantitative information on this error source is available). The likelihood function thus reads:

$$\begin{aligned} \pi ({\mathbf {d}}^\text {obs}|{\mathbf {p}})=\pi _\text {meas}({\mathbf {d}}^\text {obs}-{\mathcal {M}}({\mathbf {p}})) \end{aligned}$$
(8)
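
As an illustration, for independent Gaussian noise (the case used later in the numerical experiments), the log-likelihood at a given time takes the following form; a minimal sketch where `d_model` stands for a cheap PGD-based evaluation of the observed outputs and is an assumption of the sketch:

```python
import numpy as np

def log_likelihood(p, t, d_obs, d_model, sigma_meas):
    """log pi(d_obs | p) for additive, independent Gaussian noise, see (8).

    d_model(p, t) returns the model counterpart of the observations at time t
    (here, temperatures T1 and T2 extracted from the PGD solution);
    sigma_meas is the vector of noise standard deviations.
    """
    r = np.asarray(d_obs) - np.asarray(d_model(p, t))   # residual d_obs - M(p)
    return float(-0.5 * np.sum((r / np.asarray(sigma_meas)) ** 2)
                 - np.sum(np.log(np.sqrt(2.0 * np.pi) * np.asarray(sigma_meas))))
```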

Furthermore, when considering sequential assimilation of measurements \({\mathbf {d}}_i^{\text {obs}}\) at time steps \(t_i\), \(i \in \{1,\ldots ,N_t\}\), the Bayesian formulation is such that the prior at time \(t_i\) corresponds to the posterior at time \(t_{i-1}\):

$$\begin{aligned} \pi ({\mathbf {p}}|{\mathbf {d}}_1^{\text {obs}},\ldots , {\mathbf {d}}_i^{\text {obs}}) \propto \left( \prod _{j=1}^{i} \pi _{t_j}({\mathbf {d}}_j^{\text {obs}}|{\mathbf {p}})\right) \cdot \pi _0({\mathbf {p}}) ; \quad \pi _{t_j}({\mathbf {d}}_j^{\text {obs}}|{\mathbf {p}})=\pi _\text {meas} \left( {\mathbf {d}}_j^\text {obs}-{\mathcal {M}}\left( {\mathbf {p}},t_j\right) \right) \end{aligned}$$
(9)

Once the PGD approximation \(T^m({\mathbf {x}},t,{\mathbf {p}})\) is built (see “Reduced order modeling using PGD” section), an explicit formulation of the non-normalized posterior density can be derived. Indeed, using the observation operator \({\mathcal {O}}\), the output \({\mathbf {d}}^m({\mathbf {p}},t)={\mathcal {O}}\left( T^m({\mathbf {x}},t,{\mathbf {p}})\right) \) can be easily computed for any value of the parameter set \({\mathbf {p}}\). The non-normalized posterior density \({\overline{\pi }}\) thus reads:

$$\begin{aligned} {\overline{\pi }}\left( {\mathbf {p}}|{\mathbf {d}}_1^{\text {obs}},\ldots , {\mathbf {d}}_i^{\text {obs}}\right) = \prod _{j=1}^{i} \pi _\text {meas} \left( {\mathbf {d}}_j^\text {obs}-{\mathbf {d}}^m\left( {\mathbf {p}},t_j\right) \right) \cdot \pi _0({\mathbf {p}}) \end{aligned}$$
(10)

From the expression of \(\pi ({\mathbf {p}}|{\mathbf {d}}^\text {obs})\) (or \(\pi ({\mathbf {p}}|{\mathbf {d}}_1^{\text {obs}},\ldots , {\mathbf {d}}_i^{\text {obs}})\)), stochastic features such as means, variances, or first-order marginals on parameters may be computed. These quantities involve high-dimensional integrals, and classical Monte-Carlo integration techniques such as Markov Chain Monte-Carlo (MCMC) require in practice a large number of samples of the posterior density. This multi-query procedure is very time-consuming and incompatible with fast computations; an alternative approach is thus considered in the following section.

Transport Map sampling

The principle of the Transport Map strategy is to build a deterministic mapping M between a reference probability measure \(\nu _\rho \) and a target measure \(\nu _\pi \). The purpose is to find the change of variables such that:

$$\begin{aligned} \int g \text {d} \nu _\pi = \int g \circ M \text {d} \nu _\rho \end{aligned}$$
(11)

In this framework, samples drawn according to the reference density are transported to become samples drawn according to the target density (Fig. 7). For the considered inference methodology, the target density corresponds to the posterior density \(\pi ({\mathbf {p}}|{\mathbf {d}}^\text {obs})\) derived from the Bayesian formulation, while a standard Gaussian density may be chosen as the reference density; for more details, we refer to [29] and the associated computational tools (see http://transportmaps.mit.edu).

Fig. 7: Illustration of the Transport Map principle for sampling a target density

From the reference density \(\rho \), the purpose is thus to build the map \(M : {\mathbb {R}}^d \rightarrow {\mathbb {R}}^d\) such that:

$$\begin{aligned} \nu _\pi \approx M_\sharp \nu _\rho = \rho \circ M^{-1}\, |\det \nabla M^{-1}| \end{aligned}$$
(12)

where \(_\sharp \) denotes the push forward operator. Once the map M is found, it can be used for sampling purposes by transporting samples drawn from \(\rho \) to samples drawn from \(\pi \). Similarly, Gaussian quadrature \((\omega _i,{\mathbf {p}}_i)_{i=1}^N\) for \(\rho \) can be transported to quadrature \((\omega _i,M({\mathbf {p}}_i))_{i=1}^N\) for \(\pi \). Such a (deterministic) numerical integration with quadrature rule from the reference Gaussian density is therefore a technique of choice used in the present work for the calculation of statistics, marginals, or any other information from the posterior pdf.
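
As a minimal sketch (written for a one-dimensional parameter and an already-computed map `M`, both assumptions of the example), posterior statistics then reduce to a weighted sum over transported Gauss–Hermite nodes:

```python
import numpy as np

def posterior_mean_std(M, n_quad=10):
    """Mean/std of the target density using a transported quadrature rule.

    M maps reference samples (N(0,1)) to target samples; nodes and weights
    come from the probabilists' Gauss-Hermite rule, renormalized so that
    the weights sum to one.
    """
    x, w = np.polynomial.hermite_e.hermegauss(n_quad)
    w = w / w.sum()
    q = M(x)                                   # transported nodes M(p_i)
    mean = np.sum(w * q)
    std = np.sqrt(np.sum(w * (q - mean) ** 2))
    return mean, std

# sanity check: the affine map M(p) = 2 + 0.5 p pushes N(0,1) to N(2, 0.25)
print(posterior_mean_std(lambda p: 2.0 + 0.5 * p))   # ~ (2.0, 0.5)
```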

Maps M are sought among Knothe–Rosenblatt rearrangements (i.e. lower-triangular and monotone maps). This particular choice of structure is motivated by the following properties (see [4, 21, 29] for all details):

  • Uniqueness and existence under mild conditions on \(\nu _\pi \) and \(\nu _\rho \);

  • Easily invertible map and Jacobian \(\nabla M\) simple to evaluate;

  • Optimality regarding the weighted quadratic cost;

  • Monotonicity essentially one-dimensional (\(\partial _{p_k}M^k >0\)).

The maps M are therefore parametrized as:

$$\begin{aligned} M({\mathbf {p}})= \left[ \begin{array}{l} M^1({\mathbf {a}}_c^1,{\mathbf {a}}_e^1,p_1) \\ M^2({\mathbf {a}}_c^2,{\mathbf {a}}_e^2,p_1,p_2)\\ \vdots \\ M^d({\mathbf {a}}_c^d,{\mathbf {a}}_e^d,p_1,p_2,\ldots ,p_d) \end{array} \right] \end{aligned}$$
(13)

with \(M^k({\mathbf {a}}_c^k,{\mathbf {a}}_e^k,{\mathbf {p}})= \Phi _c({\mathbf {p}}) {\mathbf {a}}_c^k+\int _{0}^{p_k} (\Phi _e(p_1,...,p_{k-1},\theta ){\mathbf {a}}_e^k)^2 \text {d} \theta \). Functions \(\Phi _c\) and \(\Phi _e\) are chosen as Hermite polynomials with coefficients \({\mathbf {a}}_c\) and \({\mathbf {a}}_e\). This integrated-squared parametrization is a classical choice that automatically ensures the monotonicity of the map, and using Hermite polynomials leads to an integration that can be performed analytically.
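
For illustration, here is a minimal one-dimensional sketch of this integrated-squared construction, using a monomial basis instead of Hermite polynomials to keep the analytic integration obvious; the coefficients are arbitrary:

```python
import numpy as np
from numpy.polynomial import polynomial as P

def monotone_map_1d(a_c, a_e):
    """M(p) = a_c + int_0^p (phi_e(theta) . a_e)^2 dtheta: monotone by
    construction since the integrand is a square, and the antiderivative
    of a squared polynomial is computed analytically."""
    squared = P.polymul(a_e, a_e)     # coefficients of (phi_e . a_e)^2
    antider = P.polyint(squared)      # analytic antiderivative, zero at 0
    return lambda p: a_c + P.polyval(p, antider)

M = monotone_map_1d(a_c=0.2, a_e=[1.0, 0.3])   # M'(p) = (1 + 0.3 p)^2 >= 0
print(M(0.0), M(1.0))                          # 0.2 and 0.2 + 1 + 0.3 + 0.03
```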

With this parametrization, the optimal map M is found by minimizing the following Kullback–Leibler (K–L) divergence:

$$\begin{aligned} \begin{aligned} {\mathcal {D}}_{KL}( M_\sharp \nu _\rho || \nu _\pi )&= {\mathbb {E}}_\rho \left[ \log \frac{\nu _\rho }{M_\sharp ^{-1} \nu _\pi }\right] \\&=\int _P \left[ \log (\rho ({\mathbf {p}}))- \log ([\pi \circ M]({\mathbf {p}})) - \log (|\det \nabla M({\mathbf {p}})|) \right] \rho ({\mathbf {p}}) \text {d} {\mathbf {p}} \end{aligned} \end{aligned}$$
(14)

that quantifies the difference between the two distributions \(\nu _\pi \) and \(M_\sharp \nu _\rho \). Still using a Gaussian quadrature rule \((\omega _i,{\mathbf {p}}_i)_{i=1}^N\) over the reference probability space associated with \(\rho \), the minimization problem reads:

$$\begin{aligned} \underset{{\mathbf {a}}_c^{1,\ldots ,d},\,{\mathbf {a}}_e^{1,\ldots ,d}}{\min }\; \sum _{i=1}^{N} \omega _i \left[ - \log \left( {\overline{\pi }} \circ M({\mathbf {a}}_c^{1,\ldots ,d},{\mathbf {a}}_e^{1,\ldots ,d},{\mathbf {p}}_i)\right) - \log \left( \left| \det \nabla M({\mathbf {a}}_c^{1,\ldots ,d},{\mathbf {a}}_e^{1,\ldots ,d},{\mathbf {p}}_i)\right| \right) \right] \end{aligned}$$
(15)

where \({\overline{\pi }}\) is the non-normalized version of the target density. This minimization problem is fully deterministic and may be solved with classical algorithms (such as BFGS), using gradient or Hessian information on the density \({\overline{\pi }}({\mathbf {p}})\).
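
To make the structure of (15) concrete, here is a fully deterministic one-dimensional sketch with an affine map \(M(p)=a_0+b^2 p\) (a simplification of the integrated-squared parametrization) and an illustrative Gaussian target, so that the BFGS solve recovers the exact affine transport:

```python
import numpy as np
from scipy.optimize import minimize

x, w = np.polynomial.hermite_e.hermegauss(10)   # quadrature for rho = N(0,1)
w = w / w.sum()

def log_pi_bar(p):                              # unnormalized target: N(2, 0.5^2)
    return -0.5 * ((p - 2.0) / 0.5) ** 2

def objective(c):                               # discretized K-L divergence (15)
    a0, b = c
    Mx = a0 + b**2 * x                          # transported quadrature nodes
    return np.sum(w * (-log_pi_bar(Mx) - np.log(b**2)))   # |det grad M| = b^2

res = minimize(objective, x0=[0.0, 1.0], method="BFGS")
a0, b = res.x
print(a0, b**2)                                 # ~2.0 and ~0.5, i.e. M(p) = 2 + 0.5 p
```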

It is important to notice that the reduced PGD representation (6) of the solution is highly beneficial to solve (15). Partial derivatives of the model with respect to parameters \({\mathbf {p}}\) can indeed be easily computed as:

$$\begin{aligned} \frac{\partial ^n T^m}{\partial p_j^n}({\mathbf {x}},t,{\mathbf {p}})=\sum _{k=1}^{m} \Lambda _k({\mathbf {x}}) \lambda _k(t) \frac{\partial ^n \alpha ^j_k}{\partial p_j^n}(p_j) \prod _{\begin{array}{c} i=1 \\ i \ne j \end{array}}^{d} \alpha ^i_k(p_i) \end{aligned}$$
(16)

and stored in the offline phase. Thanks to the separated representation of the PGD, cross-derivatives are computed by combining derivatives of the univariate modes. As a result, the use of PGD also speeds up the computation of the transport maps.
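
Continuing the `pgd_evaluate` sketch given earlier, the derivative (16) only swaps one parametric factor for its precomputed derivative; `dalpha_j`, holding \(\partial \alpha _k^j/\partial p_j\) on the same grid, is an assumption of the sketch:

```python
import numpy as np

def pgd_derivative(Lambda, lam, alphas, dalpha_j, j, param_grids, ix, it, p):
    """First derivative of the PGD expansion with respect to parameter p_j,
    obtained by replacing the j-th parametric mode by its derivative (16)."""
    m = Lambda.shape[0]
    dT = 0.0
    for k in range(m):
        term = Lambda[k, ix] * lam[k, it]
        for i, (grid, alpha) in enumerate(zip(param_grids, alphas)):
            mode = dalpha_j[k] if i == j else alpha[k]   # swap in d(alpha^j)/dp_j
            term *= np.interp(p[i], grid, mode)
        dT += term
    return dT
```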

The quality of the approximation \(M_\sharp \nu _\rho \) of the measure \(\nu _\pi \) can be estimated by the convergence criterion \(\epsilon _\sigma \) (variance diagnostic) defined in [29] as:

$$\begin{aligned} \epsilon _\sigma = \frac{1}{2} {\mathbb {V}}\text {ar}_\rho \left[ \log \frac{\nu _\rho }{M_\sharp ^{-1} \nu _\pi }\right] \end{aligned}$$
(17)

The numerical cost for computing this criterion is very low as the integration is performed using the reference density and with the same quadrature rule as the one used in the computation of the K–L divergence. Therefore, an adaptive strategy regarding the order of the map can be used to derive an automatic algorithm that guarantees the quality of the approximation \(M_\sharp \nu _\rho \).
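
A sketch of this diagnostic, on the same one-dimensional quadrature rule as above; the unknown normalizing constant of \({\overline{\pi }}\) only shifts the log-ratio by a constant and thus drops out of the variance:

```python
import numpy as np

def variance_diagnostic(x, w, log_pi_bar, M, log_abs_det_grad_M):
    """Variance diagnostic (17), computed with the reference quadrature
    (x, w); valid up to the normalization constant of the target."""
    # log(rho / pushforward pullback) up to additive constants
    f = -0.5 * x**2 - log_pi_bar(M(x)) - log_abs_det_grad_M(x)
    mean = np.sum(w * f)
    return 0.5 * np.sum(w * (f - mean) ** 2)
```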

In the case of sequential inference, the Transport Map method exploits the Markov structure of the posterior density (9). Indeed, instead of being fully computed, the map between the reference density \(\rho \) and the posterior density at time \(t_i\) is obtained by composition of low-order maps (see Fig. 8):

$$\begin{aligned} \left( M_1 \circ \ldots \circ M_i \right) _\sharp \rho ({\mathbf {p}}) = \left( {\mathbb {M}}_i\right) _\sharp \rho ({\mathbf {p}}) \approx \pi ({\mathbf {p}}|{\mathbf {d}}_1^{\text {obs}},\ldots , {\mathbf {d}}_{i}^{\text {obs}}) \end{aligned}$$
(18)

Therefore, at each assimilation step \(t_i\), only the last map component \(M_i\) is computed between \(\rho \) and the density \(\pi _i^*\) defined as:

$$\begin{aligned} {\pi }^*_i({\mathbf {p}})=\pi _{t_i}({\mathbf {d}}_i^\text {obs}|{\mathbb {M}}_{i-1}({\mathbf {p}}))\cdot \rho ({\mathbf {p}}) \end{aligned}$$
(19)

which leads to a process with almost constant CPU effort.
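
A sketch of this sequential loop is given below, written for a scalar parameter for brevity; `compute_map` stands for the K-L minimization sketched above and is an assumption of the sketch:

```python
import numpy as np

def sequential_assimilation(log_likelihoods, log_prior, compute_map):
    """Sequential inference (18)-(19) by composition of low-order maps.

    log_likelihoods : list of functions p -> log pi_{t_i}(d_i^obs | p)
    compute_map     : solver returning a map for a given log-target
    """
    maps = []
    def composed(p):                        # M_i = M_1 o M_2 o ... o M_i
        for M in reversed(maps):            # apply the most recent map first
            p = M(p)
        return p
    for i, log_lik in enumerate(log_likelihoods):
        if i == 0:                          # first target: likelihood x prior
            log_target = lambda p, ll=log_lik: ll(p) + log_prior(p)
        else:                               # pullback target (19): likelihood
            # composed with the previous maps, times the reference rho
            log_target = lambda p, ll=log_lik: ll(composed(p)) - 0.5 * p**2
        maps.append(compute_map(log_target))
    return composed                         # maps rho samples to posterior samples
```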

Fig. 8: Flowchart of sequential inference using transport maps (L is a normalizing linear map)

Real-time control

In addition to the mean, maximum a posteriori (MAP), or other estimates of the model parameters, another major post-processing step in the DDDAS feedback loop is the prediction of quantities of interest from the model, such as the temperature \(T_3\) at the remote point \({\mathbf {x}}_3\) in the present context (see Fig. 2). Once the parameters \({\mathbf {p}}\) (\(\sigma \) and Pe here) are inferred in a probabilistic way at each assimilation time point \(t_i\) (\(1\le i \le N_t\)), it is indeed valuable to propagate uncertainties a posteriori in order to know their impact on the output of interest \(T_3\) during the process, and consequently to assess the welding quality.

As the PGD model gives an explicit prediction of the temperature field over the whole space-time-parametric domain, the output \(T_3\) can be easily computed for all values of the parameter samples and at each physical time point \(\tau _j\), \(j \in \{1,\ldots ,N_\tau \}\). For a given physical time point \(\tau _j\), the pdf \(\pi (T_{3|\tau _j}|{\mathbf {p}},t_i)\) of the value of the temperature \(T_3\) knowing uncertainties on the parameter set \({\mathbf {p}}\) from data assimilation up to time point \(t_i\) can thus be computed in real-time and used to determine if the plates are correctly welded and with which confidence. In practice, this computation may be performed for all physical time points \(\tau _j \ge t_i\), and the density \(\pi (T_{3|\tau _j}|{\mathbf {p}},t_i)\) is characterized by a (Gaussian) quadrature rule using the Transport Map method. With this knowledge, a stochastic computation of the predicted temperature evolution can be obtained, and the control of the welding process from the numerical model can be performed.

We detail below the procedure to dynamically determine the value of the control variable u (magnitude of the heat source) in the case where the welding objective is to reach a sufficient welding depth. The quantity of interest is then the maximal value of the temperature \(T_3\), obtained at the final time \(\tau ^*\), which is an indicator of the welding quality. When \(T_{3|\tau ^*} \ge 1\), the welding depth is assumed to be sufficient. Other welding objectives will be considered in “Results and discussion” section, associated with similar strategies for command synthesis.

Due to the stochastic framework which is employed, the quantity of interest is actually a random variable with pdf \(\pi (T_{3|\tau ^*}|{\mathbf {p}},t_i)\) evolving at each data assimilation time \(t_i\).

The proposed quantity q to monitor is:

$$\begin{aligned} q=\text {mean}(T_{3|\tau ^*}) - 3\, \text {std}(T_{3|\tau ^*}) = {\mathcal {Q}}(T_{3|\tau ^*}) \end{aligned}$$
(20)

where \({\mathcal {Q}}\) is an operator defined in the stochastic space. This way, setting the objective \(q_\text {obj} = 1\) ensures that the temperature \(T_{3|\tau ^*}\) is larger than the melting temperature with a confidence of 99%, while using minimal energy (no overheating).

Using the PGD solution computed in “Reduced order modeling using PGD” section for a unit magnitude of the heat source (\(u=1\)) and zero initial conditions, the predicted (stochastic) maximal value \(T_3\) for a given constant magnitude u and for fixed pdfs of \({\mathbf {p}}\) reads:

$$\begin{aligned} T_{3|\tau ^*} \approx u\cdot {T^m}({\mathbf {x}}_3,\tau ^*,{\mathbf {p}})=u\cdot {\sum _{k=1}^{m}} \Lambda _k({\mathbf {x}}_3) \lambda _k(\tau ^*) \prod _{i=1}^{d} \alpha ^i_k(p_i) \end{aligned}$$
(21)

so that \(q=u\cdot {\mathcal {Q}}\left( T^m({\mathbf {x}}_3,\tau ^*,{\mathbf {p}})\right) \) can be obtained in a straightforward manner. This way, setting the source magnitude u to \(u_0=q_\text {obj}/{\mathcal {Q}}\left( T^m({\mathbf {x}}_3,\tau ^*,{\mathbf {p}})\right) \) makes it possible to reach the welding objective.
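
With posterior samples (or transported quadrature values) of the unit-magnitude prediction \(T^m({\mathbf {x}}_3,\tau ^*,{\mathbf {p}})\), this initial command is a one-liner; the sample array is an assumption of the sketch:

```python
import numpy as np

def initial_command(T3_unit_samples, q_obj=1.0):
    """u_0 = q_obj / Q(T^m(x_3, tau*, p)) with Q from (20), evaluated on
    posterior samples of the unit-source (u = 1) prediction."""
    Q = np.mean(T3_unit_samples) - 3.0 * np.std(T3_unit_samples)
    return q_obj / Q
```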

Nevertheless, in practice the pdfs on parameters \({\mathbf {p}}\) are updated at each assimilation time point \(t_i\), based on additional experimental information, so that the value of u needs to be tuned with time accordingly. In order to do so, the control variable u(t) is made piecewise constant in time, under the form:

$$\begin{aligned} u(t) = u_0\cdot {H(t)} + \sum _{i=1}^{N_t}\delta u_i\cdot {H(t-t_i)} \end{aligned}$$
(22)

where H is the Heaviside function, \(u_0\) is the initial command on the source magnitude (defined from the prior pdfs on \({\mathbf {p}}\)), and \(\delta u_i\) is the correction to the current command at each assimilation time \(t_i\). Using the linearity of the problem with respect to the loading, a PGD solution associated with the command is made of a series of PGD solutions translated in time; it reads:

$$\begin{aligned} u_0\cdot {T^m}({\mathbf {x}},t,{\mathbf {p}}) + \sum _{n=1}^{N_t}\delta u_n\cdot {T^m}({\mathbf {x}},t-t_n,{\mathbf {p}}) \end{aligned}$$
(23)

Therefore, after each assimilation time point \(t_i\), the new prediction of the quantity of interest \(T_{3|\tau ^*}\) can be easily obtained from PGD:

$$\begin{aligned} \begin{aligned} T_{3|\tau ^*}&\approx u_0\cdot {T^m}({\mathbf {x}}_3,\tau ^*,{\mathbf {p}}) + \sum _{n=1}^i \delta u_n\cdot {T^m}({\mathbf {x}}_3,\tau ^*-t_n,{\mathbf {p}}) \\&= T^{pred,[0,i-1]}_{3|\tau ^*}({\mathbf {p}}) + \delta u_i\cdot {T^m}({\mathbf {x}}_3,\tau ^*-t_i,{\mathbf {p}}) \end{aligned} \end{aligned}$$
(24)

where \(T^{pred,[0,i-1]}_{3|\tau ^*}({\mathbf {p}})=u_0\cdot {T^m}({\mathbf {x}}_3,\tau ^*,{\mathbf {p}}) + \sum _{n=1}^{i-1} \delta u_n\cdot {T^m}({\mathbf {x}}_3,\tau ^*-t_n,{\mathbf {p}})\) is the prediction on \(T_{3|\tau ^*}\) considering the history of the control variable u(t) until time \(t_i\). Consequently, the correction \(\delta u_i\) is defined such that \({\mathcal {Q}}(T_{3|\tau ^*})=q_\text {obj}\), using (24) and considering the current pdfs of the parameter set \({\mathbf {p}}\) (i.e. those obtained after the last Bayesian data assimilation at time \(t_i\)).
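
Since the standard deviation in \({\mathcal {Q}}\) is not affine in \(\delta u_i\), this defines a scalar equation for the correction; a minimal sketch using a bracketing root solver, where the sample arrays and the bracket bounds are assumptions:

```python
import numpy as np
from scipy.optimize import brentq

def command_correction(T_pred, T_shift, q_obj=1.0):
    """Correction delta_u_i such that Q(T_pred + du * T_shift) = q_obj, see
    (24); T_pred and T_shift are posterior samples of the current prediction
    and of the unit response T^m(x_3, tau* - t_i, p), respectively."""
    def residual(du):
        T = T_pred + du * T_shift
        return np.mean(T) - 3.0 * np.std(T) - q_obj    # Q(T) - q_obj
    return brentq(residual, -10.0, 10.0)               # assumed sign-changing bracket
```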

Results and discussion

We now implement the DDDAS procedure proposed in “Methods” section on the model problem defined in “Introduction” section. We investigate three test cases involving different welding scenarios, in order to illustrate the flexibility of the approach and show its performance. For all scenarios, two temperature data \(T_1^\text {obs}\) and \(T_2^\text {obs}\) are assimilated at each assimilation time point \(t_i\) in order to refine the knowledge on parameters \(\sigma \) and Pe, and further predict the value of the quantity of interest for control purposes. Without loss of generality, we assume that assimilation time points \(t_i\), \(i \in \{1, \ldots , N_t\}\), coincide with discretization time points \(\tau _j\).

Case 1: control of the welding depth with constant physical process parameters

In this first test case, the control objective is the one mentioned in “Real-time control” section, that is \({\mathcal {Q}}(T_{3|\tau ^*})=1\), with \({\mathcal {Q}}\) the operator defined in (20) and \(\tau ^*=45\). This ensures that the temperature \(T_3\) at final time \(\tau ^*\) is larger than the melting temperature with a confidence of 99%, while using the minimal source energy.

We use synthetic data: measurements are simulated using the PGD model with reference parameter values \((\sigma _{ref}=0.4,Pe_{ref}=-60)\), which are assumed constant in time in this section. Independent Gaussian noise is added, with zero mean and standard deviations \(\sigma _1^\text {meas}=0.01925\) and \(\sigma _2^\text {meas}=0.01245\). Figure 9 shows the model outputs \(T_1\) and \(T_2\) at each time step as well as the perturbed outputs which provide the measurements used for the considered example, in the case where the control on the system is not activated (i.e. \(u=1\)). When this control is implemented (see “On-the-fly control of the welding process” section), synthetic data are generated by taking into account the applied control law.
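
A sketch of this synthetic data generation, with the noise levels quoted above; `model_T1T2` stands for the PGD-based evaluation of the two sensor outputs and is an assumption of the sketch:

```python
import numpy as np

rng = np.random.default_rng(1)
sigma_ref, Pe_ref = 0.4, -60.0
sigma_meas = np.array([0.01925, 0.01245])      # noise std for T1 and T2

def make_observations(model_T1T2, times):
    """Perturb the exact model outputs with independent Gaussian noise."""
    exact = np.array([model_T1T2(sigma_ref, Pe_ref, t) for t in times])
    return exact + rng.normal(0.0, sigma_meas, size=exact.shape)
```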

Fig. 9: Measurements simulated with the numerical model, when the control is not activated - Case 1

The goal of the test case is to perform a detailed analysis of the proposed DDDAS approach, in terms of dynamical model updating, uncertainty propagation on the quantity of interest, and on-the-fly command synthesis.

Dynamical updating of model parameters

The prior density on the parameters \((\sigma , Pe)\) is chosen as the product of two independent Gaussian densities with means \((\mu _{\sigma }=0.4,\mu _{Pe}=-60)\) and variances \((\sigma ^2_{\sigma }=0.003,\sigma ^2_{Pe}=7)\). The Transport Map strategy detailed in “Real-time data assimilation with Bayesian inference and Transport Map sampling” section and coupled with PGD is then applied for sequential data assimilation, assuming for the moment a constant magnitude \(u=1\) of the heat source. The solution of the heat equation (1) is used in its PGD form, and derivatives of the approximate solution \(T^m\) with respect to the parameters to be inferred are computed in order to derive the transport maps (i.e. the successive maps \(M_1, \ldots ,M_{N_t}\)) effectively. Table 1 reports the computation time required to compute the transport maps at each assimilation step. We compare computation times when different derivative order information is provided to the minimization algorithm. With order 0, the minimization problem (15) is solved using a BFGS algorithm where the gradient is computed numerically. With order 1, the minimization is also performed using a BFGS algorithm, but with the gradient given explicitly in terms of the PGD mode derivatives. With order 2, a conjugate gradient algorithm is used with an explicit formulation of both gradient and Hessian. The stopping criterion is a tolerance of \(10^{-3}\) on the variance diagnostic (17), and the complexity of the maps (order of the Hermite polynomials) is increased until this tolerance is fulfilled.

It appears that the first assimilation step is the most expensive, as the complexity of the transformation between the reference and the first posterior density is large (a 4th order map is required to fulfill the variance diagnostic criterion). The transformations computed at the other assimilation time steps are much less expensive (less than 1 s each), as they are built between intermediate posteriors which differ only slightly at each step and can thus be represented by a linear (i.e. first order) transformation. For the first iteration, the speed-up is about 5.5 between zeroth-order and first-order information, and about 1.34 between first-order and second-order information. For the other time steps, the speed-up is very small as the computed map is very simple. We observe that using gradient and Hessian information to solve the minimization problem related to the computation of the transport maps leads to low computation times.

Table 1 Computation costs of the transport maps depending on the derivatives order information given to the minimization algorithm

In Fig. 10, information on the computation cost over the time steps, using both gradient and Hessian information (order 2 information), is provided: Fig. 10a shows the computation time to build each map \(M_i\), \(i \in \{1,\ldots , N_t\}\), while the cost in terms of model evaluations to compute each map is displayed in Fig. 10b. A level 10 Gauss–Hermite quadrature is used. From the second step to the final step, we observe that the computation time slowly increases (Fig. 10a) while the evaluation cost slowly decreases (Fig. 10b). This is due to the fact that the cost of evaluating the composition of maps grows with the number of steps. One way to circumvent this issue would be to perform regression on the map composition.

Fig. 10: Cost of the transport map computations using Hessian information, for each assimilation step - Case 1

Figures 11 and 12 represent the marginals at each time step for the parameters \(\sigma \) and Pe, respectively. The color map indicates the probability density function values. Over the time steps, we observe that the marginals become thinner, with larger maximal pdf values, giving more confidence in the parameter estimation. We also observe that the inference process is less sensitive to the parameter \(\sigma \) than to the parameter Pe.

After 45 assimilation time steps, the algorithm gives a maximum a posteriori estimate of \([0.394,-60.193]\) and a mean estimate of \([0.392, -59.949]\). These values are very close to the reference values \([0.40,-60]\) used to simulate the measurements.

Fig. 11: Marginals on \(\sigma \) computed with 20,000 samples and kernel density estimation, for each assimilation time step - Case 1

Fig. 12: Marginals on Pe computed with 20,000 samples and kernel density estimation, for each assimilation time step - Case 1

Fig. 13: Prediction of the output \(T_3\) for all time steps after the considered assimilation step - Case 1

Uncertainty propagation on the quantity of interest

Still assuming a constant magnitude \(u=1\) of the heat source, uncertainty propagation is performed in real-time in order to predict the evolution of the temperature \(T_3\) (in terms of pdf) in the region of interest. Knowing the uncertainties on the parameters, the goal is to predict, at each assimilation time point, the evolution of the temperature \(T_3\) over the next physical time steps. This is easily done owing to the PGD model, as the temperature field is globally and explicitly known over the time domain and with respect to the values of \(\sigma \) and Pe. The computation is performed after each assimilation time point \(t_i\) and for all the physical time points \(\tau _j \ge t_i\).
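
A minimal sketch of this propagation step (matching the 20,000-sample kernel density estimates used in Figs. 11 and 12); `composed_map` and `T3_pgd` are assumptions standing for the current composed transport map and the PGD evaluation at \({\mathbf {x}}_3\):

```python
import numpy as np
from scipy.stats import gaussian_kde

def propagate_T3(composed_map, T3_pgd, tau_j, n_samples=20000, d=2, seed=0):
    """Pdf of T_3 at physical time tau_j given the current posterior.

    composed_map : p_ref -> p, transports reference samples to posterior ones
    T3_pgd       : (p, tau) -> T^m(x_3, tau, p), cheap PGD evaluation
    """
    rng = np.random.default_rng(seed)
    ref = rng.standard_normal((n_samples, d))           # samples of rho
    post = np.apply_along_axis(composed_map, 1, ref)    # posterior samples of p
    T3 = np.array([T3_pgd(p, tau_j) for p in post])     # cheap PGD evaluations
    return gaussian_kde(T3)                             # kernel density estimate
```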

Figure 13a shows the prediction result with uncertainty propagation after the first assimilation time point \(t_1\), for all the physical steps \(\tau _j\), \(j > 1\). To that end, samples are drawn according to the first posterior \(\pi (\sigma ,Pe|T_1^{\text {obs},1},T_2^{\text {obs},1}) \propto \pi _{t_1}(T_1^{\text {obs},1},T_2^{\text {obs},1}|\sigma ,Pe)\cdot \pi _0(\sigma ,Pe)\). The slice \([\tau _0,\tau _1]\) represents the guess on the temperature \(T_3\) from the prior uncertainty knowledge on the parameters \((\sigma ,Pe)\), before the first assimilation step \(t_1\). For \(\tau _j>\tau _1\), the graph represents the prediction of the output \(T_3\) considering the current knowledge of the parameter uncertainty (i.e. with the assimilation of the first set of measurements \(T_1^{\text {obs},1}\) and \(T_2^{\text {obs},1}\) alone). The dashed line represents the evolution of the temperature \(T_3\) with the true parameter values \((\sigma =0.4, Pe=-60)\).

The other graphs (Fig. 13b–d) show the refinement of the prediction as the knowledge of the parameter uncertainty improves. The current measurement assimilation step is indicated by the vertical cursor. On the right of the cursor \(\tau =t_i\), the graphs represent the prediction of the temperature \(T_3\) from the model after the assimilation of the measurements \(T_1^{\text {obs},1:i}\) and \(T_2^{\text {obs},1:i}\). On the left of the cursor, each slice \([t_{j-1},t_j]\) (\(j\le i\)) represents the prediction made at the assimilation time \(t_j\) (the predictions of the temperature \(T_3\) for physical time steps prior to the assimilation time step \(t_i\) are not updated).

Figure 14 shows the convergence of the prediction on the quantity of interest \(T_{3|\tau ^*}\) in the steady-state regime (\(\tau ^*=45\)) with respect to the assimilation steps. We observe that, as expected, more confidence is given to this output along the real-time data assimilation process.

Fig. 14: Prediction of temperature \(T_3\) at physical time step \(\tau ^*=45\) after each assimilation time step \(t_i\), \(i\in \{1,\ldots ,45\}\) - Case 1

On-the-fly control of the welding process

The previously described assimilation procedure, performed in situ and in real-time, can be used for welding control. If the stochastic prediction of the quantity of interest \(T_{3|\tau ^*}\) does not satisfy the criterion \({\mathcal {Q}}(T_{3|\tau ^*})=1\), a change in the command u(t) can be implemented as described in “Real-time control” section. This is what is performed here.

In Fig. 15, we show the time evolution of the pdf associated with the prediction of \(T_{3|t}\), with and without control. In the case without control, the sharp time evolution is due to changes in the pdfs of \(\sigma \) and Pe along the data assimilation steps. We observe that the quantity \({\mathcal {Q}}(T_{3|\tau ^*})\) is much larger than 1, indicating overheating and wasted energy. On the contrary, implementing the control by varying the magnitude u of the heat source makes it possible to reach the criterion \({\mathcal {Q}}(T_{3|\tau ^*})=1\) exactly, and it also speeds up the convergence of the pdf of \(T_{3|t}\) to the target.

In Fig. 16, we indicate the evolution of the command variable along the welding process (in terms of corrections \(\delta u_i\) at each assimilation time point \(t_i\)). We again observe that the feedback loop is effective and quickly (i.e. well before the final time \(\tau ^*\)) leads to an asymptotic regime in which the command remains almost constant (i.e. \(\delta u_i \approx 0\)). We also show in Fig. 16 the map orders which are used along the data assimilation process when the control is performed. This indicates that an order 1 map is still usually sufficient, but that a few more higher-order maps are required compared to the case with no control (where only the first map was of order 4). Finally, we display in Fig. 17 the evolution in time of the overall CPU cost required to implement the feedback loop, which includes both data assimilation and command synthesis steps. As expected, this cost is higher during the first assimilation times, when the pdfs of parameters \(\sigma \) and Pe evolve significantly (i.e. when much is learnt from measurement data). Once the asymptotic regime is reached in the model updating procedure, the CPU cost is low (< 1 s), which is compatible with the real-time constraints of the considered welding application.

Fig. 15: Evolution in time of \(T_{3|t}\) without control (left) and with control (right) - Case 1

Fig. 16: Evolution of the command variable in terms of incremental corrections (left), and map order required at each assimilation time step in the case of system control (right) - Case 1

Fig. 17: Computation time including the computation of the transport maps and the command synthesis - Case 1

Fig. 18: Marginals on \(\sigma \) (left) and Pe (right) at each assimilation time step - Case 2

Case 2: control of the welding depth with evolving physical process parameters

This second test case has many similarities with the previous one, the control objective still being \({\mathcal {Q}}(T_{3|\tau ^*})=1\). Nevertheless, we now take \(\tau ^*=100\) and we assume that the welding process experiences an unexpected change in the Peclet number during service (e.g. due to a change in the source velocity or in the material thermal properties) at \(t=40\). Consequently, the reference parameter values which are now used to generate synthetic (noisy) data are:

$$\begin{aligned} \sigma _{ref} = 0.4; \quad Pe_{ref} = {\left\{ \begin{array}{ll} -60 &{} \text {for } t < 40 \\ -55 &{} \text {for } t \ge 40 \end{array}\right. } \end{aligned}$$
(25)

Starting from the same prior distribution of the parameters as in test case 1, sequential data assimilation using Transport Map sampling and PGD is again performed. The minimization problem associated with the computation of the maps is solved with order 1 derivative information, that is, a BFGS algorithm with explicit computation of the gradient from the PGD representation. The complexity of the maps (that is, the degree of the employed Hermite polynomials) is increased until a tolerance of \(10^{-3}\) on the variance diagnostic is reached. We represent in Fig. 18 the evolution in time of the marginals on both parameters \(\sigma \) and Pe. Again, we observe that they become thinner, with larger maximal pdf values, as the number of data assimilation times increases. We also observe that after the change of the reference value for Pe, the data assimilation algorithm is able to detect this change and infers a mean value that slowly tends to the new reference value (even though right after \(t=40\), the reference parameter value \(Pe_{ref}=-55\) appears in the tail of the pdf). Meanwhile, during this transient regime, it seems that no additional knowledge is brought to the inference of \(\sigma \), as the associated marginals stagnate. We also show in Fig. 19 the map orders which are used along the data assimilation process. This indicates in particular that an order 1 map remains sufficient to follow the sudden change in the reference value for Pe.

Fig. 19: Map order required at each assimilation time step - Case 2

From the dynamical updating of the model parameters and with respect to the objective, the control of the process with on-the-fly command synthesis is implemented. We show in Fig. 20 the time evolution of the pdf of \(T_{3|t}\) in the case of a controlled welding process. We observe that the control objective is reached even though the pdfs of the model parameters have not yet converged around the reference parameter values. This illustrates the interest of control in a stochastic framework, in which uncertainty on the inferred parameters is taken into account in the synthesis of the command in order to make safe decisions. We also plot in Fig. 21 the evolution of the command variable u(t) along the process as well as its incremental corrections \(\delta u_i\) at each time point \(t_i\); we clearly observe the change in the command when the physical value of the Peclet number changes at \(t=40\).

Fig. 20: Evolution in time of \(T_{3|t}\) when the control is implemented - Case 2

Fig. 21: Evolution of the command variable (left) and its incremental corrections (right) along the controlled welding process - Case 2

Case 3: control of the welding temperature evolution with prescribed time path

In this last test case, the control objective is to make the temperature \(T_{3|t}\) follow a predefined time path, which amounts to imposing the welding history along the process. We set the final time \(\tau ^*=100\) and assume that the reference parameter values are \(\sigma _{ref}=0.4\) and \(Pe_{ref}=-60\) (constant in time). Synthetic measurement data are simulated from these values, with additive measurement noise.

The prescribed evolution curve for \(T_{3|t}\) is shown in Fig. 22 (dashed red line). It is a ramp increase up to \(t=20\), followed by a plateau. In our stochastic framework, the command law is designed so that the predicted mean value of \(T_{3|t}\) follows this target evolution. In practice, at each assimilation time point \(t_i\), and from the inferred pdfs of the model parameters at this time, a command correction \(\delta u_i\) is computed so that the prediction of \(\text {mean}(T_{3|t_{i+1}})\) coincides with the target value at the next assimilation time point \(t_{i+1}\). The evolution of \(T_{3|t}\) predicted from the model with reference parameter values, and without any control, is also shown in Fig. 22 (solid black line).

Fig. 22: Target (dashed red line) and free system (solid black line) evolution curves for \(T_{3|t}\) - Case 3

Starting from the same prior distribution of the parameters as in the previous test cases, sequential data assimilation using Transport Map sampling and PGD is performed. The minimization problem associated with the computation of the maps is solved with order 1 derivative information, and the complexity of the maps is increased until a tolerance of \(10^{-3}\) on the variance diagnostic is reached. We represent in Fig. 23 the evolution in time of the marginals on both parameters \(\sigma \) and Pe. As expected, we observe that they become thinner, with larger maximal pdf values, and tend to the reference parameter values along the data assimilation process. The map orders which are used along this process are shown in Fig. 24; they again indicate that order 1 is sufficient, except for the first assimilation steps where the complexity of the transformation between the reference density and the first posterior densities is higher.

Fig. 23: Marginals on \(\sigma \) (left) and Pe (right) at each assimilation time step - Case 3

Fig. 24: Map order required at each assimilation time step - Case 3

From the dynamical updating of the model parameters and with respect to the objective, the control of the process with on-the-fly command synthesis is implemented. We show in Fig. 25 the resulting time evolution of the pdf of \(T_{3|t}\). We observe that \(\text {mean}(T_{3|t})\) almost perfectly matches the target evolution. We also plot in Fig. 26 the evolution of the command variable u(t) along the process as well as its incremental corrections \(\delta u_i\) at each time point \(t_i\). We observe that during the transient phase (ramp evolution of the target), fast modifications of the command are required, while the command increments tend to zero once the steady-state target regime is reached. Overall, this test case shows that the proposed DDDAS strategy is capable of generating complex and effective command laws.

Fig. 25: Evolution in time of \(T_{3|t}\) when the control is implemented - Case 3

Fig. 26: Evolution of the command variable (left) and its incremental corrections (right) along the controlled welding process - Case 3

Conclusions

In this work we presented a procedure to build a numerical feedback loop for the control of a fusion welding process from modeling and simulation, while taking uncertainties into account. In order to perform fast computations and permit real-time exchanges between the physical system and its virtual twin, PGD model reduction and Transport Map sampling were used in several numerical tasks along the feedback loop. In particular, the explicit dependency on the model parameters inside the PGD model, as well as the suitable sampling and integration framework offered by transport maps, made it possible to effectively perform data assimilation, uncertainty quantification, and predictive control. The implementation of the feedback loop for various control scenarios illustrated the interest and performance of the proposed approach, which thus appears to be a relevant tool for real-time feedback control in the DDDAS framework. Future work should focus on the extension of the approach to more complex (e.g. nonlinear) models, associated with modeling errors that may be a priori considered in the Bayesian framework but also a posteriori corrected from data-based learning and enrichment. Dealing with a larger number of model parameters and control variables in the DDDAS context is also a research topic of interest that will be investigated in forthcoming works.

Availability of data and material

The datasets used during the current study are available from the corresponding author on reasonable request.

References

  1. Arulampalam MS, Maskell S, Gordon N, Clapp T. A tutorial on particle filters for online nonlinear/non-Gaussian Bayesian tracking. IEEE Transactions on Signal Processing. 2002;50(2):174–88.

  2. Beck JL. Bayesian system identification based on probability logic. Structural Control and Health Monitoring. 2010;17(7):825–47.

  3. Berger J, Orlande HRB, Mendes N. Proper Generalized Decomposition model reduction in the Bayesian framework for solving inverse heat transfer problems. Inverse Problems in Science and Engineering. 2017;25(2):260–78.

  4. Bogachev VI, Kolesnikov AV, Medvedev KV. Triangular transformations of measures. Sbornik: Mathematics. 2005;196:309.

  5. Bouclier R, Louf F, Chamoin L. Real-time validation of mechanical models coupling PGD and constitutive relation error. Computational Mechanics. 2013;52(4):861–83.

  6. Calvetti D, Dunlop M, Somersalo E, Stuart A. Iterative updating of model error for Bayesian inversion. Inverse Problems. 2018;34(2).

  7. Chamoin L, Allier PE, Marchand B. Synergies between the Constitutive Relation Error concept and PGD model reduction for simplified V&V procedures. Advanced Modeling and Simulation in Engineering Sciences. 2016;3:18.

  8. Chamoin L, Pled F, Allier PE, Ladevèze P. A posteriori error estimation and adaptive strategy for PGD model reduction applied to parametrized linear parabolic problems. Computer Methods in Applied Mechanics and Engineering. 2017;327:118–46.

  9. Chinesta F, Ladevèze P, Cueto E. A short review on model order reduction based on Proper Generalized Decomposition. Archives of Computational Methods in Engineering. 2011;18(4):395–404.

  10. Chinesta F, Keunings R, Leygue A. The Proper Generalized Decomposition for Advanced Numerical Simulations: A Primer. SpringerBriefs in Applied Sciences and Technology; 2014.

  11. Chinesta F, Cueto E, Abisset-Chavanne E, Duval J-L, Khaldi FE. Virtual, digital and hybrid twins: a new paradigm in data-based engineering and engineered data. Archives of Computational Methods in Engineering. 2020;27:105–34.

  12. Darema F. Dynamic Data Driven Applications Systems: a new paradigm for application simulations and measurements. Computational Science – ICCS. 2004:662–9.

  13. El Moselhy TA, Marzouk Y. Bayesian inference with optimal maps. Journal of Computational Physics. 2012;231(23):7815–50.

  14. Gamerman D, Lopes HF. Markov Chain Monte Carlo: Stochastic Simulation for Bayesian Inference. CRC Press; 2006.

  15. Gonzalez D, Masson F, Poulhaon F, Leygue A, Cueto E, Chinesta F. Proper Generalized Decomposition based dynamic data driven inverse identification. Mathematics and Computers in Simulation. 2012;82(9):1677–95.

  16. Grepl M. Reduced-Basis Approximation and A Posteriori Error Estimation. PhD thesis; 2005.

  17. Kaipio J, Somersalo E. Statistical and Computational Inverse Problems. New York: Springer-Verlag; 2004.

  18. Ladevèze P. On reduced models in nonlinear solid mechanics. European Journal of Mechanics - A/Solids. 2016;60:227–37.

  19. Manzoni A, Pagani S, Lassila T. Accurate solution of Bayesian inverse uncertainty quantification problems combining reduced basis methods and reduction error models. SIAM/ASA Journal on Uncertainty Quantification. 2016;4(1):380–412.

  20. Marchand B, Chamoin L, Rey C. Real-time updating of structural mechanics models using Kalman filtering, modified Constitutive Relation Error and Proper Generalized Decomposition. International Journal for Numerical Methods in Engineering. 2016;107(9):786–810.

  21. Marzouk Y, Moselhy T, Parno M, Spantini A. Sampling via measure transport: an introduction. Handbook of Uncertainty Quantification. 2016:1–41.

  22. Matthies HG, Zander E, Rosic BV, Litvinenko A, Pajonk O. Inverse problems in a Bayesian setting. Computational Methods for Solids and Fluids. 2016;41:245–86.

  23. Parno MD, Marzouk YM. Transport map accelerated Markov Chain Monte-Carlo. SIAM/ASA Journal on Uncertainty Quantification. 2018;6(2):645–82.

  24. Peherstorfer B, Willcox K. Dynamic data-driven reduced-order models. Computer Methods in Applied Mechanics and Engineering. 2015;291:21–41.

  25. Robert CP, Casella G. Monte Carlo Statistical Methods. New York: Springer Texts in Statistics; 2004.

  26. Rubio PB, Louf F, Chamoin L. Fast model updating coupling Bayesian inference and PGD model reduction. Computational Mechanics. 2018;62(6):1485–509.

  27. Rubio PB, Louf F, Chamoin L. Transport Map sampling with PGD model reduction for fast dynamical Bayesian data assimilation. International Journal for Numerical Methods in Engineering. 2019;120(4):447–72.

  28. Rubio PB, Chamoin L, Louf F. Real-time Bayesian data assimilation with data selection, correction of model bias, and on-the-fly uncertainty propagation. Comptes Rendus Mécanique. 2019;347:762–79.

  29. Spantini A, Bigoni D, Marzouk Y. Inference via low-dimensional couplings. Journal of Machine Learning Research. 2018;19:1–71.

  30. Stuart AM. Inverse problems: a Bayesian perspective. Acta Numerica. 2010;19:451–559.

  31. Tarantola A. Inverse Problem Theory and Methods for Model Parameter Estimation. Society for Industrial and Applied Mathematics; 2005.


Funding

No specific funding has to be declared for this work.

Author information

Authors and Affiliations

Authors

Contributions

All authors discussed the content of the article, and were involved in the definition of techniques and algorithms. All authors read and approved the final manuscript.

Corresponding author

Correspondence to Ludovic Chamoin.

Ethics declarations

Competing interests

The authors declare that they have no competing interests.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.


About this article


Cite this article

Rubio, PB., Chamoin, L. & Louf, F. Real-time data assimilation and control on mechanical systems under uncertainties. Adv. Model. and Simul. in Eng. Sci. 8, 4 (2021). https://doi.org/10.1186/s40323-021-00188-3
