
Bayesian calibration of coupled computational mechanics models under uncertainty based on interface deformation

Abstract

Calibration, or parameter identification, relates computational mechanics models to observed data of the modeled process in order to find model parameters such that good agreement between model prediction and observation is achieved. We present a Bayesian calibration approach for surface coupled problems in computational mechanics based on measured deformation of an interface when no displacement data of material points is available. The interpretation of such a calibration problem as a statistical inference problem, in contrast to deterministic model calibration, is computationally more robust and allows the analyst to find a posterior distribution over possible solutions rather than a single point estimate. The proposed framework also enables the consideration of unavoidable uncertainties that are present in every experiment and are expected to play an important role in the model calibration process. To mitigate the computational costs of expensive forward model evaluations, we propose to learn the log-likelihood function from a controllable number of parallel simulation runs using Gaussian process regression. We introduce and specifically study the effect of three different discrepancy measures between reference data and simulation for deformed interfaces. We show that a statistically based discrepancy measure results in the most expressive posterior distribution. We further apply the approach to numerical examples in higher model parameter dimensions and interpret the resulting posterior under uncertainty. In the examples, we investigate coupled multi-physics models of fluid–structure interaction effects in biofilms and find that the model parameters affect the results in a coupled manner.

Introduction

In this article we present a robust approach for Bayesian calibration [1, 2] of coupled computational mechanics models based on the deformation of an interface or boundary. In the most general case, the search for a set of parameters leading to a desired model result can be understood as an inverse problem [3]. The basic elements are a computational mechanics model \(\mathfrak {M}\), also called the forward model, with model parameters \({\varvec{x}}\) that are considered as inputs to the forward problem. The model parameters can be prescribed in the model and are expected to significantly influence the associated model response. The inverse problem is characterized by the task of finding one or multiple model parameters that result in a desired model behavior. In this paper, we are interested in a special case of desired model behavior, which is given in the form of observed experimental data. Here, we want to find suitable model parameters such that the model response is close to the experimental data in a given metric. This category of inverse problems is also known as a calibration or parameter identification problem.

We want to distinguish two different viewpoints on the calibration problem. We refer to the first one as the deterministic calibration approach, which poses the calibration task as an optimization problem: a discrepancy function between the forward system response and the experimental data is minimized over the forward model parameters. Problems formulated in this deterministic optimization setting are often ill-posed. The second viewpoint is the probabilistic Bayesian calibration approach that we follow in this article. In contrast to the aforementioned optimization, the Bayesian approach adopts a statistical viewpoint and seeks the posterior probability density for the input parameters. This density quantifies, for each input, how probable it is that the resulting forward model output matches the experimental data in a specified norm. Instead of a single point estimate as in the optimization problem, the posterior distribution provides a unique probability density for all inputs in the input space, which allows a plethora of additional research questions to be answered. A Bayesian, statistical viewpoint does not only provide a powerful mathematical framework for the formulation of the inverse problem but also helps with the design and interpretation of very flexible discrepancy measures between simulation output and experimental observation. These can be formulated in the form of reproducing kernel Hilbert space (RKHS) norms, as demonstrated in this article. The Bayesian setting allows for the incorporation of available prior knowledge, which is especially advantageous in the small data regime induced by expensive simulation runs or limited experimental data. Finally, the Bayesian formulation provides a consistent mathematical framework that can naturally deal with different sources of uncertainty, such as those arising from partially unknown experimental conditions, as also studied in this work.

The focus of the presented approach is the scenario where a mechanical structure changes its shape under mechanical load, but no displacements of individual material points can be determined. Such scenarios appear when only the shape of a structure and its changes can be observed, e.g., in the form of image data. In that case only information about the shape of a boundary or interface is reliably accessible, without further details on the correspondence of material points between simulation and experimental data. For such scenarios, which have been studied before [4] for, e.g., cardiac mechanics [5] or arterial growth [6], we want to investigate and discuss the effect of different definitions of discrepancy measures between simulated and observed interface deformations of objects. Especially for bio-materials, the main interest is to determine material properties as they act in-situ, i.e., in the natural environment. Traditional material testing often defines standardized testing methods where the specimen must be isolated and installed in a specific testing device. For very sensitive materials, the isolation of a specimen can already change the properties of the material of interest significantly. Such scenarios often involve coupled physics, e.g., fluid–structure interaction (FSI) problems, where isolating the specimen would interfere with the coupling. Testing such sensitive materials therefore requires a comparison between deformations in the computational model and the deformation observed in an in-situ experiment.

While the presented approach will be useful for any kind of (coupled) mechanical model, our focus will be on the particularly challenging problem class of fluid–structure interaction (FSI). A specific motivation for us is our research on biofilms and respective experiments conducted with such biofilms. Biofilms are aggregates of microorganisms that form a structure of extracellular polymeric substances, also known as the biofilm matrix [7], to withstand environmental influences. Due to their soft consistency, among other reasons, the determination of mechanical properties of biofilms is an open field of research, and different intrusive and non-intrusive attempts have been made to quantify the material behavior [8,9,10]. A better understanding and analysis of biofilm material properties is essential to explain biofilm behavior and to enable the development of reliable and predictive computational modeling. Well parameterized mechanical models enable engineers to make valid predictions of biofilm behavior, e.g., deformation, growth or erosion, and to use these predictions to develop biofilm-prone systems, i.e., to either avoid invasive biofilms or improve productive biofilm systems. Biofilms usually develop on surfaces exposed to fluid flow, and therefore a mechanical model must include the FSI between the biofilm surface and the fluid. As the capability of computational mechanics models of fluid–structure interaction increases, the inclusion of such models in deterministic inverse analysis of biofilms has emerged in recent years [11, 12]. One of the best approaches to acquire image data of biofilms is optical coherence tomography (OCT) in flow cell experiments, as used in, e.g., [11, 13, 14]. This type of experiment is favorable for mechanical testing as it is non-destructive and the biofilm can be kept in the same environment for the whole cultivation and test process. Recent advances in automated biofilm cultivation and design of flow cell experiments [15] have shown that variety in biofilm shapes is inevitable even for reproducible environmental conditions, and therefore a flexible method of comparing biofilm shapes is required for conclusive inverse analysis. Recently, Bayesian estimation and uncertainty quantification (UQ) have been used for models of urea hydrolysis by biofilms [16].

The rest of the article is structured as follows. First, the theoretical concepts of the Bayesian approach for efficient model calibration under uncertainty are presented. The technical details and realization of our approach are then outlined. Eventually, we demonstrate and discuss the calibration procedure for numerical models in examples of biofilm-motivated fluid–solid interaction with generated data in two to six input dimensions. Finally, results and key aspects of the workflow are summarized.

Continuous formulation of the Bayesian calibration problem under uncertainty

Our Bayesian calibration approach is based on the continuous formulation of Bayes’ rule. Therefore, the general formulation of Bayesian calibration is introduced first and subsequently extended to include the effects of uncertainties. As a necessary step, the formulation of a likelihood model is described and specific choices for the measure of discrepancy between interface shapes are introduced.

Standard formulation for Bayesian calibration

We assume that we have a computational forward model \(\mathfrak {M}\left( {\varvec{x}},C\right) \) for a real-world process, i.e., in our case a single- or multi-field continuum mechanical problem, whose response depends on the choice of inputs \({\varvec{x}}\) and the choice of (e.g., spatio-temporal) coordinates \(C\). In general, observations are compared in more than one location and point in time and therefore the coordinates are written as a matrix \(C\) with vector entries for each comparison. Consequently, the observations for all coordinates are a matrix \(Y_{\textrm{obs},C}\) as well with vector entries for each coordinate vector. In our case for the comparison of interface deformations, the observations are locations of the interface that define its shape for one or more points in time. The input parameters \({\varvec{x}}\) are the quantities of interest in the calibration process. The vector \({\varvec{x}}\) denotes a collection of model parameters that are subject to calibration and might represent, e.g., material parameters, boundary or initial conditions. The observed data \(Y_{\textrm{obs},C}\) might additionally be subject to unknown measurement noise.

In the Bayesian framework, the calibration problem is interpreted as an update of a prior belief about the parameters, encoded by a prior density \(p\left( {\varvec{x}}\right) \), by a so-called likelihood model \(p\left( Y_{\textrm{obs},C}|\mathfrak {M}\left( {\varvec{x}},C\right) \right) \). The likelihood model expresses the probability density to observe the experimental data \(Y_{\textrm{obs},C}\) at coordinate \(C\) given a specific choice of model with input \({\varvec{x}}\) evaluated at the same coordinates \(C\). As the expression \(p\left( Y_{\textrm{obs},C}|\mathfrak {M}\left( {\varvec{x}},C\right) \right) \) is only a valid density in \(Y_{\textrm{obs},C}\) but not in the model inputs \({\varvec{x}}\) one mostly refers to it as the so-called likelihood function. The likelihood function relates the output \(\mathfrak {M}\left( {\varvec{x}},C\right) \) of the computational model \(\mathfrak {M}\) with the observation \(Y_{\textrm{obs},C}\), given a specific choice of parameters \({\varvec{x}}\). The value of \(p\left( Y_{\textrm{obs},C}|\mathfrak {M}\left( {\varvec{x}},C\right) \right) \) at a specific \({\varvec{x}}\) can be interpreted as the probability density of the observations \(Y_{\textrm{obs},C}\) for the given model choice \(\mathfrak {M}\left( {\varvec{x}},C\right) \) or as a score value for the parameter choice \({\varvec{x}}\). The product of the likelihood function and the prior can be evaluated point-wise in the parameter space \(\Omega _{{\varvec{x}}}\), yielding a function that we call the unnormalised posterior \(p\left( Y_{\textrm{obs},C}|\mathfrak {M}\left( {\varvec{x}},C\right) \right) p\left( {\varvec{x}}\right) \). Normalizing this expression by \(\int p\left( Y_{\textrm{obs},C}|\mathfrak {M}\left( {\varvec{x}},C\right) \right) p\left( {\varvec{x}}\right) \textrm{d}{\varvec{x}}\) such that we get a valid density that integrates to one, yields the posterior distribution \(p\left( {\varvec{x}}|Y_{\textrm{obs},C}\right) \). The posterior distribution can be interpreted as an updated prior distribution, after the knowledge of the experimental data \(Y_{\textrm{obs},C}\) has been incorporated and related to the forward model \(\mathfrak {M}\left( {\varvec{x}},C\right) \).

An advantage of the Bayesian viewpoint on calibration is the possibility to encode prior knowledge about the unknown parameters or inputs \({\varvec{x}}\) in the so-called prior distribution \(p\left( {\varvec{x}}\right) \). Prior knowledge is information about the parameters that is available before seeing the data. Often, at least a vague understanding of which values are possible or realistic is available (e.g., the Young’s modulus must be positive, \(E> 0\,\textrm{Pa} \)). This and additional valuable expert knowledge can be incorporated in the prior, and the solution of the calibration then complies with it.
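As a brief illustration of how such prior knowledge can be encoded, the following sketch defines a joint prior for a hypothetical parameter vector \({\varvec{x}} = [\nu , E]\) using standard probability distributions; the distribution families and ranges are merely exemplary assumptions and not the priors used later in this article.

import numpy as np
from scipy import stats

# Exemplary prior choices (assumptions for illustration only):
# a log-normal prior enforces E > 0 Pa, a uniform prior restricts nu to (0, 0.5).
prior_E = stats.lognorm(s=0.5, scale=400.0)   # median 400 Pa
prior_nu = stats.uniform(loc=0.0, scale=0.5)  # uniform on (0, 0.5)

def log_prior(x):
    """Joint log prior density for x = [nu, E], assuming independent priors."""
    nu, E = x
    return prior_nu.logpdf(nu) + prior_E.logpdf(E)

print(log_prior(np.array([0.3, 400.0])))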

We can then pose the calibration problem as a Bayesian inference task by applying Bayes’ rule [17]

$$\begin{aligned} \underbrace{p\left( {\varvec{x}}|Y_{\textrm{obs},C}\right) }_{\text {posterior}} = \frac{\overbrace{p\left( Y_{\textrm{obs},C}|\mathfrak {M}\left( {\varvec{x}},C\right) \right) }^{\text {likelihood}}\overbrace{p\left( {\varvec{x}}\right) }^{\text {prior}}}{\underbrace{\int p\left( Y_{\textrm{obs},C}|\mathfrak {M}\left( {\varvec{x}},C\right) \right) p\left( {\varvec{x}}\right) \textrm{d}{\varvec{x}}}_{\text {evidence}}}\propto \underbrace{p\left( Y_{\textrm{obs},C}|\mathfrak {M}\left( {\varvec{x}},C\right) \right) p\left( {\varvec{x}}\right) }_{\text {unnormalized posterior}}. \end{aligned}$$
(1)

In (1), the posterior \(p\left( {\varvec{x}}|Y_{\textrm{obs},C}\right) \) represents the unique solution of the Bayesian calibration task in the form of a probability density. It is a probability distribution over the input parameters \({\varvec{x}}\) assigning a probability density to each input vector of how well the forward model \(\mathfrak {M}\left( {\varvec{x}},C\right) \) evaluated with that model parameter combination represents the observation. The interest of the analyst is to find high posterior values and learn for which inputs they occur. The posterior density is usually not known in closed form, due to the implicit dependency on \({\varvec{x}}\) in the forward model \(\mathfrak {M}\left( {\varvec{x}},C\right) \) within the likelihood function. Nevertheless, the posterior density can be evaluated point-wise, which implies a forward simulation run for the particular choice of \({\varvec{x}}\).

For computationally expensive models, a grid-based evaluation of the posterior in the entire input space \(\Omega _{{\varvec{x}}}\) is unfeasible. Due to the curse of dimensionality, this problem becomes especially amplified for \(\dim {({\varvec{x}})} \gg 1\). The numerical approximation of the posterior is hence usually conducted using more advanced algorithms which aim to exploit regions in the input space with high posterior density. In this work, we use the sequential Monte Carlo (SMC) method for this purpose but postpone a more detailed discussion of that algorithm and its numerical realization to the dedicated sections, to first focus on the continuous presentation of the Bayesian calibration problem.

The denominator \(\int p\left( Y_{\textrm{obs},C}|\mathfrak {M}\left( {\varvec{x}},C\right) \right) p\left( {\varvec{x}}\right) \textrm{d}{\varvec{x}}\), the evidence in (1), acts as a normalizing constant for the posterior, such that the posterior becomes a valid density function in \({\varvec{x}}\) which integrates to one on \(\Omega _{{\varvec{x}}}\). Nevertheless, the evidence is mostly not computed explicitly, as it involves potentially high dimensional integration over \(\Omega _{{\varvec{x}}}\). Consequently, most numerical algorithms operate on the unnormalized posterior or its logarithm and take care of the normalization in an easier-to-compute post-processing step. Given the posterior distribution or a numerical representation in the form of samples or an approximating distribution, further statistics or point estimates can be calculated and derived.
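The following minimal sketch illustrates this way of working: the algorithm evaluates the logarithm of the unnormalized posterior, and the evidence is recovered in a post-processing step, here on a simple one-dimensional grid with a toy likelihood and prior (both purely illustrative assumptions).

import numpy as np
from scipy.special import logsumexp

# Toy 1D example: Gaussian-shaped log-likelihood around x = 0.4, flat prior on [0, 1].
def log_likelihood(x):
    return -0.5 * ((x - 0.4) / 0.05) ** 2

def log_prior(x):
    return np.zeros_like(x)  # uniform (unnormalized) prior on the grid below

x_grid = np.linspace(0.0, 1.0, 1001)
log_unnormalized_posterior = log_likelihood(x_grid) + log_prior(x_grid)

# Post-processing: approximate the evidence and normalize the density on the grid.
dx = x_grid[1] - x_grid[0]
log_evidence = logsumexp(log_unnormalized_posterior) + np.log(dx)
posterior = np.exp(log_unnormalized_posterior - log_evidence)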

Remark 1

(Marginalization) Sometimes one might be interested in the density over a subset of variables, averaging over the remaining parameters. This can be achieved by so-called marginalization. A marginal represents a projection of a higher-dimensional density, e.g., \(p\left( {\varvec{x}}_i,{\varvec{x}}_j\right) \), onto a selected subset of parameters \({\varvec{x}}_i\), and reflects the average effect of all other parameters \({\varvec{x}}_j\) on the density over the parameter subset \({\varvec{x}}_i\). The marginal distribution is then expressed by the following integration

$$\begin{aligned} p\left( {\varvec{x}}_i\right) = \int p\left( {\varvec{x}}_i,{\varvec{x}}_j\right) \textrm{d}{\varvec{x}}_j = \int p\left( {\varvec{x}}_i|{\varvec{x}}_j\right) p\left( {\varvec{x}}_j\right) \textrm{d}{\varvec{x}}_j = {\mathbb {E}}_{{\varvec{x}}_j}\left[ p\left( {\varvec{x}}_i|{\varvec{x}}_j\right) \right] . \end{aligned}$$
(2)

One example is the projection of a higher-dimensional posterior \(p\left( {\varvec{x}}_i,{\varvec{x}}_j| Y_{\textrm{obs},C}\right) \) to a marginal posterior \(p\left( {\varvec{x}}_i| Y_{\textrm{obs},C}\right) \) such that the analyst can investigate the posterior in \({\varvec{x}}_i\) averaged over the effect of \({\varvec{x}}_j\). This enables us to inspect and plot marginal posterior distributions with \(\dim ({\varvec{x}}_i) < 3 \) in the result plots. A simple numerical illustration of (2) is given below.
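The sketch uses a toy two-dimensional density (an assumption for demonstration purposes only) and obtains the marginal by numerically integrating the joint density over the remaining coordinate.

import numpy as np

# Toy correlated 2D density on a grid, normalized numerically.
xi = np.linspace(-3.0, 3.0, 201)
xj = np.linspace(-3.0, 3.0, 201)
XI, XJ = np.meshgrid(xi, xj, indexing="ij")
joint = np.exp(-0.5 * (XI**2 + XJ**2 - XI * XJ))
joint /= np.trapz(np.trapz(joint, xj, axis=1), xi)

# Marginal p(x_i) = integral of p(x_i, x_j) over x_j (trapezoidal rule along axis 1).
marginal_i = np.trapz(joint, xj, axis=1)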

Bayesian calibration under uncertainty

After the basic concepts of Bayesian calibration have been presented, we want to extend the ideas to the case of non-controllable, uncertain conditions that influence the model \(\mathfrak {M}\). Those are summarized in the vector \(\varvec{\theta }\). We assume that \(\varvec{\theta }\) is not part of the model input variables \({\varvec{x}}\) that we want to calibrate. Instead, it represents inherently uncertain external conditions, e.g., of the experimental set-up, that we cannot fully control at the time of the analysis. Furthermore, we assume that these conditions are subject to uncertainties expressed by a distribution \(p\left( \varvec{\theta }\right) \) and that the computational forward model is also dependent on \(\varvec{\theta }\) in the sense of \(\mathfrak {M}\left( {\varvec{x}},\varvec{\theta },C\right) \).

The calibration problem under uncertainty can then be formulated in two steps [18, 19]. First, we naively compose a Bayesian calibration problem in analogy to (1), with the only difference that the model is also dependent on \(\varvec{\theta }\). Consequently, the posterior \(p\left( {\varvec{x}}|\varvec{\theta },Y_{\textrm{obs},C}\right) \) is conditionally dependent on \(\varvec{\theta }\), as demonstrated in (3a). In a second step, we can then average over the effect of the uncertain conditions \(\varvec{\theta }\) by taking the expectation of the previous posterior \({\mathbb {E}}_{\varvec{\theta }}\left[ p\left( {\varvec{x}}|\varvec{\theta },Y_{\textrm{obs},C}\right) \right] \) with respect to the density \(p\left( \varvec{\theta }\right) \) as shown in (3b). The resulting modified posterior, which accounts for the average effect of the additional uncertainty introduced by \(\varvec{\theta }\) is then denoted by \(q\left( {\varvec{x}}|Y_{\textrm{obs},C}\right) \) and is different from the former posterior \(p\left( {\varvec{x}}|Y_{\textrm{obs},C}\right) \) which did not incorporate these additional uncertainties.

$$\begin{aligned} p\left( {\varvec{x}}|\varvec{\theta },Y_{\textrm{obs},C}\right)&= \frac{p\left( Y_{\textrm{obs},C}|\mathfrak {M}\left( {\varvec{x}},\varvec{\theta },C\right) \right) p\left( {\varvec{x}}\right) }{\int p\left( Y_{\textrm{obs},C}|\mathfrak {M}\left( {\varvec{x}},\varvec{\theta },C\right) \right) p\left( {\varvec{x}}\right) \textrm{d}{\varvec{x}}} \end{aligned}$$
(3a)
$$\begin{aligned} q\left( {\varvec{x}}|Y_{\textrm{obs},C}\right)&= {\mathbb {E}}_{\varvec{\theta }}\left[ p\left( {\varvec{x}}|\varvec{\theta },Y_{\textrm{obs},C}\right) \right] = \int p\left( {\varvec{x}}|\varvec{\theta },Y_{\textrm{obs},C}\right) p\left( \varvec{\theta }\right) \textrm{d}\varvec{\theta } \end{aligned}$$
(3b)

Another interpretation of (3b) is the marginalization of the extended posterior \(p\left( {\varvec{x}},\varvec{\theta }|Y_{\textrm{obs},C}\right) \) with respect to the uncertain conditions \(\varvec{\theta }\). Later on, we show that the sequential Monte Carlo (SMC) algorithm allows simple operations on the unnormalized extended posterior, analogous to the unnormalized posterior from (1) if no additional uncertainties are present. SMC offers the possibility to conduct the necessary marginalization of \(\varvec{\theta }\) as a cheap post-processing step.

Remark 2

(Point estimates and moments of the posterior) Given a potentially high dimensional and complex posterior density, there is often the desire to represent its characteristics by simpler, e.g., scalar, quantities. The most intuitive approach, especially coming from the mindset of deterministic optimization, is to look for the maximum a posteriori (MAP) estimate. It represents the combination of parameters that leads to the highest posterior density. Furthermore, the maximum likelihood (ML) estimate is interesting to isolate the model feedback from prior assumptions; it is analogous to the MAP under the assumption of uniform priors. Statistics of the posterior, such as the posterior mean (PM) and variance, can also be used for its simplified quantification. Besides point estimates, a region of values with highest posterior density can be determined. This region is characterized by holding a certain probability mass fraction of the posterior and is often called a percentile.

Selecting a likelihood model

The evaluation of the likelihood function represents the computationally expensive part of the Bayesian calibration as a forward simulation run has to be conducted for every evaluation of the likelihood with respect to \(\mathfrak {M}\left( {\varvec{x}},\varvec{\theta },C\right) \). The likelihood is a probabilistic model for the discrepancy \(\mathfrak {D}\) between experimental data \(Y_{\textrm{obs},C}\) and forward model output \(\mathfrak {M}\left( {\varvec{x}},\varvec{\theta },C\right) \), written as \(\mathfrak {D}=\mathfrak {D}(Y_{\textrm{obs},C}, \mathfrak {M}\left( {\varvec{x}}, \varvec{\theta },C\right) )\). It usually models the statistical behavior of experimental noise and modeling errors. It returns the probability density of the experimental data for a given choice of simulation model. This means that it is centered around the forward model results. Usually, only one experiment is observed.

While many different likelihood models exist [20, 21], we chose the common case of a conditionally independent Gaussian likelihood model with static noise-variance \(\sigma _{\textrm{N}}^2\). This choice implies that we assume that the scattering of the measured data \(Y_{\textrm{obs},C}\) can be well-explained by a normal distribution which is centered around the simulation output \(Y_{C}\) with variance \(\sigma _{\textrm{N}}^2\). Here conditional independence means that the measured noise in \(y_{\text {obs},c_1}\) does not influence the noise in \(y_{\text {obs},c_2}\), with \(c_1\) and \(c_2\) being two different coordinates.

The conditionally independent Gaussian static likelihood model is given by

$$\begin{aligned} p\left( Y_{\textrm{obs},C}|\mathfrak {M}\left( {\varvec{x}}, \varvec{\theta }, C\right) \right) = \frac{1}{\sqrt{\left( 2 \pi \sigma _{\textrm{N}}^2\right) ^n}} \textrm{exp}\left( -\frac{\mathfrak {D}^2\left( \mathfrak {M}\left( {\varvec{x}},\varvec{\theta },C\right) ,Y_{\textrm{obs},C}\right) }{2 \sigma _{\textrm{N}}^2}\right) , \end{aligned}$$
(4)

wherein n is the dimension of the measurement \(Y_{\textrm{obs},C}\). In the simplest case it is the total number of individual measurements. As most inverse iterators operate on the logarithm of the densities for better numerical conditioning, we also provide the logarithmic version of (4), which is abbreviated by \(\mathfrak {L}({\varvec{x}},\varvec{\theta })\), resulting in

$$\begin{aligned} \mathfrak {L}({\varvec{x}},\varvec{\theta }) = - \frac{n}{2}\log (2\pi \sigma _{\textrm{N}}^2) - \frac{\mathfrak {D}^2}{2\sigma _{\textrm{N}}^2}. \end{aligned}$$
(5)
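A direct transcription of (5) is straightforward; the sketch below assumes that the discrepancy value \(\mathfrak {D}\) has already been computed by one of the measures discussed in the next section.

import numpy as np

def gaussian_log_likelihood(discrepancy, sigma_n, n):
    """Log of the conditionally independent Gaussian likelihood (5).

    discrepancy: scalar discrepancy D between forward model output and observation
    sigma_n:     noise standard deviation of the likelihood model
    n:           number of individual measurements
    """
    return -0.5 * n * np.log(2.0 * np.pi * sigma_n**2) - discrepancy**2 / (2.0 * sigma_n**2)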

Discrepancy measures between interfaces

In selecting a likelihood model, the immediate question arises of how to define a discrepancy measure \(\mathfrak {D}\) between forward model and experimental results. This becomes especially intricate when the experimental images do not provide real displacements of individual material points, i.e., in this sense prohibit a point-wise correspondence of the geometrical objects. This constraint essentially only allows for comparisons of edges, boundaries and interfaces in the image data, which can usually be identified based on their contrast. This step is called segmentation of the image data; more detailed information on segmentation approaches is out of scope for this work. As a result of the segmentation process one obtains a geometric representation of the interfaces between different subdomains in the images.

In the following, we present three different discrepancy measures that can be used when working with this kind of image data, namely the Euclidean distance at measurement points, the closest point projection distance and a reproducing kernel Hilbert space norm. Figure 1 shows sketches of the discussed distance measure definitions in 2D. The sketches in Fig. 1 are related to the type of problem, fluid–biofilm interaction, exemplarily analyzed in the numerical examples. The methods are not bound to this type of problem and can be applied to different topologies or isolated shape characteristics.

Fig. 1

Exemplary 2D sketches for different types of discrepancy measures between experiment (green) and discretized forward model (black). Portions of the observed interface that are accounted for are given in green and unconsidered portions are given in gray. a Euclidean distances \(d_\textrm{mp}^i\) at measurement points in preset directions in light blue, b closest point projection distances \( d_\textrm{cpp}^i \) for all nodes in light blue, c triangulation centers \(\varvec{c}_i\) as crosses and normal vectors \(\varvec{n}_i \) of equal length as arrows for triangulations of the experimentally observed interface in green and the forward model result in blue. Detailed explanation in text below

Euclidean distance at measurement points The first approach is a Euclidean distance measure as presented in [12]. At several selected points on the experimentally observed biofilm surface, specific directions are chosen and the distance to the corresponding deformed interface resulting from the forward model simulation is measured. As discussed in [12], measurement points should be selected in regions of significant, characteristic displacements, and measurement directions should be chosen normal to the observed interface (see Fig. 1a with predefined directions in blue). These distances are collected in a vector of length \({n_\mathrm {\textrm{mp}}}\) and represent the distance measure between the forward model evaluation and the experimental observation. The resulting distance measure \(\mathfrak {D}_\textrm{mp}\) is then defined via the \(\text {L}_2\)-norm of the distance vector

$$\begin{aligned} \mathfrak {D}_\textrm{mp}= \left\| \begin{bmatrix} d_\textrm{mp}^1 \\ \vdots \\ d^{n_\mathrm {\textrm{mp}}}_\textrm{mp}\end{bmatrix}\right\| _{2}. \end{aligned}$$
(6)

This procedure is especially well suited if the observed experimental measurements have different reliability throughout the regarded interface, e.g., due to the particular physics or imaging peculiarities, as only a point-wise and not a full representation of the interface must be measured (i.e., the gray parts in Fig. 1a are not part of the analysis). By a meaningful selection of measurement points, where significant deformation takes place and the data is trusted, the analyst has direct control over which data is being processed.

Closest point projection Another approach for a distance between curves or surfaces is the closest point projection distance for a selected number of points on the interface in the discretized forward model. In the case of finite element models, a suitable choice for these points are the Gauss points or the mesh nodes (as pictured and used here) on the regarded interface. The closest point projection distance to the experimental result is determined based on the segmented image of the interface, as shown in Fig. 1b, and collected in a distance vector whose length equals the number of interface nodes \({n_\textrm{in}}\). Afterwards, we define the \(\text {L}_2\)-norm of the distance vector as the distance measure \(\mathfrak {D}_\textrm{cpp}\) with

$$\begin{aligned} \mathfrak {D}_\textrm{cpp}= \left\| \begin{bmatrix}d_\textrm{cpp}^1 \\ \vdots \\ d^{n_\textrm{in}}_\textrm{cpp}\end{bmatrix}\right\| _{2}. \end{aligned}$$
(7)

We use the discretization of the finite element model and define the distance vector by computing, for each interface node, the distance of its closest point projection onto the experimental surface or interface.
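A minimal sketch of this measure is given below; as a simplifying assumption, the closest point projection onto the segmented observed interface is approximated by the nearest neighbor among densely sampled points of that interface, rather than by an exact projection onto the curve or surface.

import numpy as np
from scipy.spatial import cKDTree

def cpp_discrepancy(model_interface_nodes, observed_interface_points):
    """Approximate closest point projection discrepancy, cf. (7).

    model_interface_nodes:     (n_in, dim) coordinates of the interface nodes of the forward model
    observed_interface_points: (m, dim) densely sampled points on the segmented observed interface
    """
    tree = cKDTree(observed_interface_points)
    d_cpp, _ = tree.query(model_interface_nodes)  # nearest-neighbor distances as projection surrogate
    return np.linalg.norm(d_cpp)                  # L2-norm of the distance vector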

Inner product in reproducing kernel Hilbert space (RKHS) A third, statistically motivated discrepancy measure is formulated as a reproducing kernel Hilbert space (RKHS) norm. It takes into account not only the location of the discretization points, but also the orientation of the surface elements in space. Before presenting our specific variant, a more general mathematical foundation of the measure is outlined in the following.

Distance measures \(\mathfrak {D}\) between two curves (or surfaces) \(\varvec{f}_1,\varvec{f}_2\) can be elegantly defined by an inner product of the distance function \(\varvec{d_f}(\varvec{s})=\varvec{f}_1-\varvec{f}_2\) in an associated Hilbert space \(\mathcal {H}\), as demonstrated in (8a). Simply speaking, a Hilbert space is a vector space equipped with an inner product and can be seen as the natural extension of the Euclidean space to arbitrary dimensions. The inner product directly induces a norm according to (8b).

$$\begin{aligned} \langle \varvec{d_f}(\varvec{s}),\varvec{d_f}(\varvec{s})\rangle _{\mathcal {H}}&=\int \varvec{d_f}(\varvec{s})\varvec{w}(\varvec{s})\varvec{d_f}(\varvec{s})d\varvec{s} \end{aligned}$$
(8a)
$$\begin{aligned} \mathfrak {D}_{\mathcal {H}}&=\left\| \varvec{d_f}(\varvec{s})\right\| _{\mathcal {H}}=\sqrt{\langle \varvec{d_f}(\varvec{s}),\varvec{d_f}(\varvec{s})\rangle _{\mathcal {H}}} \end{aligned}$$
(8b)

Here, \(\varvec{w}(\varvec{s})>0\) is an optional weighting function. A special case of a Hilbert space \(\mathcal {H}\) is the so-called reproducing kernel Hilbert space (RKHS), here denoted by \(\mathcal {V}\). We omit a full description of RKHSs and only present the most important concepts in this work but point the interested reader to [22] for a comprehensive derivation and description of the latter. Some further details of the RKHS framework are presented in the appendix. An RKHS \(\mathcal {V}\) is a Hilbert space that is equipped with a reproducing kernel \(\varvec{k}(\varvec{x},\varvec{y})\) and the inner product

$$\begin{aligned} \langle \varvec{d_f}(\varvec{x}),\varvec{d_f}(\varvec{x})\rangle _{\mathcal {V}}&=\int \varvec{d_f}(\varvec{x})\varvec{k}(\varvec{x},\varvec{x}')\varvec{d_f}(\varvec{x}')d\varvec{x}d\varvec{x}' \end{aligned}$$
(9a)
$$\begin{aligned} \mathfrak {D}_{\mathcal {V}}&=\left\| \varvec{d_f}(\varvec{x})\right\| _{\mathcal {V}}=\sqrt{\langle \varvec{d_f}(\varvec{x}),\varvec{d_f}(\varvec{x})\rangle _{\mathcal {V}}}. \end{aligned}$$
(9b)

The inner product of the difference in the normal vectors of two functions describing two interfaces in (10b) results in a sensitive distance measure that also accounts for the orientation of the interfaces. The distance of the function values is encoded in the kernel function, which expresses the statistical correlation between the functions. For the kernel we choose the radial basis function (RBF) in (10b), which takes the location vectors, i.e., the function values \(\varvec{f}(\varvec{s})\), as arguments and returns their correlation. The expression is also known under the term surface currents [23] and was already successfully applied for inverse analysis in [6, 24]

$$\begin{aligned}&\varvec{d_n}(\varvec{s})=\varvec{n}_{\varvec{f}_1}(\varvec{s})-\varvec{n}_{\varvec{f}_2}(\varvec{s}) \end{aligned}$$
(10a)
$$\begin{aligned}&\begin{aligned}&\mathfrak {D}_{\mathcal {V},{\textrm{sc}}}=\sqrt{\langle \varvec{d_n},\varvec{d_n} \rangle _{\mathcal {V}}},\\&\text {with } k\left( \varvec{f}_{1},\varvec{f}_{2}\right) =\textrm{exp}\left( - \frac{\left\| \varvec{d_f}\right\| _{2}^2}{2\sigma _\textrm{W}^2}\right) \end{aligned} \end{aligned}$$
(10b)

We want to highlight that (9a) allows for many other definitions of \(\varvec{d}\) and \(\varvec{k}\), which can be used to emphasize special features of the investigated data; further examples are, however, outside the scope of this article.

The discrete representation of (10b), e.g., in a finite element simulation, is expressed in terms of the respective surface or curve elements of the meshes. In the simplest case those are interface triangles, resulting from linear tetrahedral elements in 3D, or interface lines, resulting from linear elements in the two-dimensional case. The interface triangles or lines have normal vectors \(\varvec{n}_{e,\varvec{f}_1,i}\), \(\varvec{n}_{e,\varvec{f}_2,j}\) and element-surface centers \(\varvec{c}_{e,\varvec{f}_1,i}\) and \(\varvec{c}_{e,\varvec{f}_2,j}\), which can be directly determined from the discretization. The difference vector of the two normal vectors is written as \({\varvec{d}_{\varvec{n},e}=\varvec{n}_{e,\varvec{f}_1,i}-\varvec{n}_{e,\varvec{f}_2,j}}\). The approximation of (10b) follows as double sums over the interface elements of \(\varvec{f}_1\) and \(\varvec{f}_2\)

$$\begin{aligned} \begin{aligned} \mathfrak {D}_{\mathcal {V}, {\textrm{sc}}}^2\approx&\langle \varvec{d}_{\varvec{n},e},\varvec{d}_{\varvec{n},e} \rangle _{\mathcal {V}} = \sum _{i} \sum _{j} \big [\varvec{n}_{e,\varvec{f}_1,i} \cdot k\left( \varvec{c}_{e,\varvec{f}_1,i},\varvec{c}_{e,\varvec{f}_1,j}\right) \varvec{n}_{e,\varvec{f}_1,j}\\&- 2\,\varvec{n}_{e,\varvec{f}_1,i} \cdot k\left( \varvec{c}_{e,\varvec{f}_1,i},\varvec{c}_{e,\varvec{f}_2,j}\right) \varvec{n}_{e,\varvec{f}_2,j} + \varvec{n}_{e,\varvec{f}_2,i} \cdot k\left( \varvec{c}_{e,\varvec{f}_2,i},\varvec{c}_{e,\varvec{f}_2,j}\right) \varvec{n}_{e,\varvec{f}_2,j}\big ] \end{aligned} \end{aligned}$$
(11)

Using the mechanism of an RKHS for the definition of a surface distance measure has some advantages that we want to emphasize. First, such a measure allows comparing the whole geometry rather than distances at selected points. It is furthermore a more flexible approach, since the kernel and therefore the associated RKHS are exchangeable. It provides a solid mathematical foundation and still gives the analyst the flexibility of choosing an appropriate kernel that emphasizes geometrical features of choice. The kernel parameters, which can be referred to as hyperparameters, can be made subject to optimization in the calibration approach, and therefore the RKHS approach allows seamless integration into a probabilistic framework.
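A compact sketch of the discrete surface-current discrepancy (11) with the RBF kernel (10b) is given below; the element normals and centers are assumed to be extracted beforehand from the respective interface discretizations (with unit, length- or area-weighted normals, depending on the chosen convention).

import numpy as np

def rbf_kernel(c1, c2, sigma_w):
    """RBF kernel between element centers, cf. (10b)."""
    d2 = np.sum((c1[:, None, :] - c2[None, :, :]) ** 2, axis=-1)
    return np.exp(-d2 / (2.0 * sigma_w**2))

def surface_current_discrepancy(n1, c1, n2, c2, sigma_w):
    """Discrete surface-current (RKHS) discrepancy, cf. (11).

    n1, c1: (N1, dim) normals and centers of the interface elements of the forward model result
    n2, c2: (N2, dim) normals and centers of the interface elements of the observation
    """
    k11 = rbf_kernel(c1, c1, sigma_w)
    k12 = rbf_kernel(c1, c2, sigma_w)
    k22 = rbf_kernel(c2, c2, sigma_w)
    d_squared = (np.einsum("ik,ij,jk->", n1, k11, n1)
                 - 2.0 * np.einsum("ik,ij,jk->", n1, k12, n2)
                 + np.einsum("ik,ij,jk->", n2, k22, n2))
    return np.sqrt(max(d_squared, 0.0))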

Remark 3

(RKHS interpretation) The interpretation of a distance measure as an RKHS norm provides a consistent mathematical foundation. In fact, the Euclidean distance and the closest point projection distance can also be interpreted as norms in different RKHSs with very specific kernels; their re-interpretation is therefore omitted here. Generally, a multitude of RKHSs with respective kernels can be designed in an elegant mathematical manner to emphasize different characteristics of the image. One example could be a weighting depending on the distance to the closest Dirichlet boundary condition.

Remark 4

(Three-dimensional geometry) Although we will only present two-dimensional applications, the distance measures are equivalently applicable to three-dimensional use cases. As the evaluation of the similarity measures is the only step where the geometry is evaluated, this generalization to 3D also holds for the whole approach presented in this work. If accurate three-dimensional images are available from the experiment, it would be preferable to use those to increase model accuracy and therefore the quality of the inverse analysis.

Bayesian calibration—realization and algorithmic aspects

After the mathematical basis for the inverse problem was presented above, we elaborate on the realization and algorithmic aspects as well as the computational efficiency of the proposed approach. Several algorithms exist to find an approximation to the posterior distributions in (1) and (3b). Common strategies are particle methods such as Markov chain Monte Carlo (MCMC) [25], specifically the well-known Metropolis-Hastings algorithm and its variants [26]. Further methods are based on importance sampling [27, 28], especially the family of sequential Monte Carlo (SMC) methods [29, 30]. Other, more recent strategies are based on variational inference [31, 32], where a parameterized distribution is optimized to match the true posterior as closely as possible in a given norm. Here, we choose an SMC method as an established, state-of-the-art method.

Numerical approximation via sequential Monte Carlo (SMC) sampling

An efficient approach is needed to approximate the unnormalized posteriors of the Bayesian calibration problem (1), (3b) and their normalization. While a grid-based approximation of the posterior is feasible for low dimensional parameters \(\varvec{x}\), the necessary number of grid-based evaluations grows exponentially with the dimension of \(\varvec{x}\), and SMC methods become significantly more efficient as they exploit regions of high density. Sequential Monte Carlo (SMC) methods are popular sampling methods to efficiently explore a probability distribution \(\pi \) for which no closed-form expression exists.

The result of the SMC sampler is a particle representation \(\delta _{{\varvec{x}}^{\left( i\right) }}\) of \(\pi \) with associated weights \(w^{\left( i\right) }\) in the form of

$$\begin{aligned} \pi \left( {\varvec{x}}\right) \approx \sum _{i=1}^Nw^{\left( i\right) }\delta _{{\varvec{x}}^{\left( i\right) }}\left( {\varvec{x}}\right) . \end{aligned}$$
(12)

Therein every particle has a location and weight \(\left\{ {\varvec{x}}^{\left( i\right) }, w^{\left( i\right) }\right\} _{i=1}^N\). It is further known [33, 34] that with this approximation any integral over an integrable function \(h\left( {\varvec{x}}^{\left( i\right) }\right) \) can be approximated by a sum that converges to the integral

$$\begin{aligned} \sum _{i=1}^Nw^{\left( i\right) }h\left( {\varvec{x}}^{\left( i\right) }\right) \rightarrow \int h\left( {\varvec{x}}\right) \pi \left( {\varvec{x}}\right) \textrm{d}{\varvec{x}} \quad \text {almost surely.} \end{aligned}$$
(13)

Specifically, we use an SMC method [35] which sequentially blends over from a particle representation of a predefined prior distribution to a particle representation of the actual posterior distribution. A convenient aspect of the particle representation (12) is that it allows for a straightforward approximation of integrals (13), as they appear in marginal distributions (2). This is especially useful for Bayesian calibration under uncertainty and the associated integration in (3b). Here, the numerical integration is conducted by simply ignoring the dependency on \(\varvec{\theta }\) in the particle representation, as sketched below.
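The following sketch (with randomly generated placeholder particles) illustrates both operations: expectations under the particle representation are weighted sums as in (13), and the marginalization in (3b) amounts to simply dropping the \(\varvec{\theta }\)-components of the particles.

import numpy as np

rng = np.random.default_rng(0)
n_particles = 1000
particles_x = rng.normal(size=(n_particles, 2))      # placeholder particle locations in x
particles_theta = rng.normal(size=(n_particles, 1))  # placeholder particle locations in theta
weights = np.full(n_particles, 1.0 / n_particles)    # normalized particle weights

# Weighted-sum approximation of an expectation, cf. (13); ignoring the theta-components
# yields statistics of the marginal posterior q(x | Y_obs), cf. (3b).
posterior_mean_x = weights @ particles_x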

The SMC algorithm used in this work employs an adaptive step length based on [33, 34, 36]. Herein, a control parameter \(\zeta \) for the effective sample size (ESS) is used to control the step length. The ESS is defined as

$$\begin{aligned} \textrm{ESS}= \frac{1}{\sum _{i=1}^N\left( w^{\left( i\right) }\right) ^2} \end{aligned}$$
(14)

and is a measure for the current weight distribution. The ESS represents how well the probability density mass is distributed amongst the used particles.

The SMC algorithm is compactly summarized in pseudo-code in Algorithm 1 and described in Remark 5. For more details, the interested reader is referred to [29, 30, 34].

Algorithm 1 Sequential Monte Carlo sampler (pseudo-code)

Remark 5

(Description of SMC algorithm) An SMC algorithm generally starts by drawing an initial set of particles from the prior distribution and weighting them equally. It then iteratively finds a suitable step size and reweighs the particles. Resampling becomes necessary if too few particles carry too much of the weight of the distribution and, likewise, many particles lose significance. The particles are sequentially rejuvenated according to the new control parameter \(\gamma \) and therefore the new intermediate distribution (15). The rejuvenation step is carried out with a scaled covariance of the current step, as demonstrated in [37], based on the acceptance rate of the Metropolis-Hastings algorithm used for rejuvenation. The Metropolis-Hastings algorithm is a Markov chain Monte Carlo (MCMC) sampler [38], whose presentation we omit for the sake of compactness; it is used to move the particles according to the new intermediate distribution. Reweighting, rejuvenation and possibly resampling are repeated iteratively until the intermediate distribution blends into the posterior, i.e. \(\gamma =1\), which is enforced as the last step if no other suitable \(\gamma \) can be found. To obtain a proper probability density, i.e. one that integrates to 1, the resulting weights are finally normalized.

The so-called tempering strategy that defines the transition from the prior to the posterior as a sequence is written as

$$\begin{aligned} \pi _{s,\gamma }\left( {\varvec{x}}\right) = \textrm{exp}\left( \gamma \mathfrak {L}\left( {\varvec{x}}\right) \right) p\left( {\varvec{x}}\right) \end{aligned}$$
(15)

with the logarithmic likelihood \(\mathfrak {L}\left( {\varvec{x}}\right) \) as introduced in (5) and the prior \(p\left( {\varvec{x}}\right) \). Finally, after reaching \(\gamma =1\) the particle approximation has blended over into one for the posterior distribution.
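To make the interplay of tempering (15), reweighting, resampling and rejuvenation concrete, the following self-contained sketch implements a basic adaptive-tempering SMC sampler. It is an illustrative simplification, not the QUEENS implementation referenced later: for instance, the proposal covariance scaling based on the Metropolis-Hastings acceptance rate [37] is omitted, and the log-likelihood, prior density and prior sampler are assumed to be user-supplied functions.

import numpy as np

def effective_sample_size(log_w):
    """ESS (14) of normalized weights, computed from unnormalized log-weights."""
    w = np.exp(log_w - np.max(log_w))
    w /= w.sum()
    return 1.0 / np.sum(w**2), w

def smc_tempering(log_likelihood, log_prior, sample_prior, n_particles=500,
                  zeta=0.5, n_mh_steps=5, seed=0):
    """Minimal adaptive-tempering SMC sketch (illustrative, not the QUEENS algorithm).

    log_likelihood(x): log-likelihood (5) of a single particle x
    log_prior(X):      vectorized log prior density for an (N, dim) array of particles
    sample_prior(N):   draws an (N, dim) array of particles from the prior
    """
    rng = np.random.default_rng(seed)
    x = sample_prior(n_particles)
    log_w = np.zeros(n_particles)                        # equal initial weights (log scale)
    log_lik = np.array([log_likelihood(xi) for xi in x])
    gamma = 0.0
    while gamma < 1.0:
        # Adaptive step: largest increment of gamma keeping ESS above zeta * N (bisection).
        lo, hi = gamma, 1.0
        for _ in range(50):
            mid = 0.5 * (lo + hi)
            if effective_sample_size(log_w + (mid - gamma) * log_lik)[0] >= zeta * n_particles:
                lo = mid
            else:
                hi = mid
        if effective_sample_size(log_w + (1.0 - gamma) * log_lik)[0] >= zeta * n_particles:
            lo = 1.0                                     # jump directly to the posterior
        gamma_new = max(lo, gamma + 1e-6)                # enforce progress
        log_w += (gamma_new - gamma) * log_lik           # reweighting towards (15)
        gamma = min(gamma_new, 1.0)

        ess, w = effective_sample_size(log_w)
        if ess < 0.5 * n_particles:                      # resample if weights degenerate
            idx = rng.choice(n_particles, size=n_particles, p=w)
            x, log_lik = x[idx], log_lik[idx]
            log_w = np.zeros(n_particles)

        # Rejuvenation: random-walk Metropolis-Hastings moves on the tempered target (15).
        cov = np.atleast_2d(0.5 * np.cov(x, rowvar=False)) + 1e-12 * np.eye(x.shape[1])
        for _ in range(n_mh_steps):
            proposal = x + rng.multivariate_normal(np.zeros(x.shape[1]), cov, size=n_particles)
            log_lik_prop = np.array([log_likelihood(p) for p in proposal])
            log_alpha = (gamma * log_lik_prop + log_prior(proposal)
                         - gamma * log_lik - log_prior(x))
            accept = np.log(rng.uniform(size=n_particles)) < log_alpha
            x[accept], log_lik[accept] = proposal[accept], log_lik_prop[accept]

    return x, effective_sample_size(log_w)[1]            # particles and normalized weights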

Approximation of the log-likelihood via Gaussian process regression

To efficiently evaluate the likelihood function in (15), we use a Gaussian process (GP) regression model [39] for the log-likelihood function (5) that can be trained on a controllable number of forward model evaluations. The main idea is that, for a given computational budget represented by the number of forward model runs, the approximation error of the surrogate is considerably smaller than the error of the SMC posterior approximation for the same number of forward simulation runs. A similar approach, where the model output is directly used as data to train a Gaussian process regression model as a surrogate for Bayesian calibration, was used in [1]. Gaussian processes are often used to generate regression models, and therefore a brief summary of the core equations is presented in the appendix. The presented overall approach would also work with different regression approaches, such as linear or spline interpolation, or, in the case of sufficiently cheap forward models, directly on the forward model itself.

We train our GP on the log-likelihood \(\mathfrak {L}\left( {\varvec{x}}\right) \) of the Bayesian calibration problem as presented in (5). This gives the advantage that the regression model does not need to fulfill positivity constraints, which would be the case if the GP were trained on the unnormalized posterior or the likelihood directly. Another advantage of learning the log-likelihood function, in contrast to learning the simulation response, is that the log-likelihood is a scalar function.

Training data \(\mathcal {D}\) are input–output pairs of values that are known about the process and that the regression model can be based on. In our case, we can evaluate the forward model for any choice of model parameters \({\varvec{x}}\), compare it to the observed experiment and determine the log-likelihood \(\varvec{\mathfrak {L}}\) (5). To simplify notation, we collect the training data outputs in a vector of log-likelihood values \(\varvec{\mathfrak {L}}_\textrm{train}\) at the training inputs \({\varvec{x}_\textrm{train}}\). Thus, we are free to choose any input batch \({\varvec{x}_\textrm{train}}\) and compute the resulting output \(\varvec{\mathfrak {L}}_\textrm{train}\) according to our computational budget. The generation of the training data \(\mathcal {D}\) for the Gaussian process regression causes the highest computational costs in the inverse analysis, as for every training tuple \(\{\mathfrak {L}_{\text {train},i},\varvec{x}_{\text {train},i}\}\) one forward model evaluation is required, due to the dependency of the log-likelihood (5) on the forward model.

Some further advantages of the surrogate approach are an a priori controllable number of simulation runs, which can also be conducted in parallel, in contrast to the batch-sequential model evaluations that SMC would impose on the forward model in a direct application. Once the surrogate is generated, the SMC algorithm can run without the risk of encountering non-converging or failing forward simulations.

To generate the training data \(\mathcal {D}\), we choose a space-filling design strategy, namely a quasi-random Sobol sequence [40, 41]. The Sobol sequence has the advantage of superior uniformity properties [42] and additionally allows for generating further sequential points that still share the space-filling property.
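The following sketch combines the Sobol training design with the GP surrogate of the log-likelihood, using the libraries mentioned in the implementation section (PyTorch for the Sobol sequence, GPy for the regression). The parameter bounds and the placeholder log-likelihood are assumptions for illustration; in the actual workflow each training point requires one forward FSI simulation and a discrepancy evaluation according to (5).

import numpy as np
import torch
import GPy

# Hypothetical parameter bounds for x = [nu, E]; a Sobol sequence fills the box.
bounds = np.array([[0.1, 0.45],        # nu
                   [100.0, 1000.0]])   # E in Pa
n_train = 128
sobol = torch.quasirandom.SobolEngine(dimension=2, scramble=True, seed=0)
unit_samples = sobol.draw(n_train).numpy()
x_train = bounds[:, 0] + unit_samples * (bounds[:, 1] - bounds[:, 0])

# Placeholder for the expensive step (one forward simulation per point), cf. (5).
def log_likelihood(x):
    return -0.5 * np.sum(((x - np.array([0.3, 400.0])) / np.array([0.05, 50.0])) ** 2)

L_train = np.array([[log_likelihood(x)] for x in x_train])

# GP regression of the scalar log-likelihood with a Matern 3/2 kernel.
kernel = GPy.kern.Matern32(input_dim=2, ARD=True)
gp = GPy.models.GPRegression(x_train, L_train, kernel)
gp.optimize()
surrogate_mean, surrogate_var = gp.predict(np.array([[0.3, 400.0]]))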

Remark 6

(Dimensions in GP approach) The number of training tuples necessary for a desired surrogate accuracy depends on the complexity of the function, the type of (space-filling) experimental design and the dimension of the function. The required amount of training data grows exponentially with the dimension of the problem (curse of dimensionality), so that for higher dimensions (a rough guideline might be \(\dim (\varvec{x})>15\)) direct sampling (using more advanced strategies that can incorporate gradient information of the model w.r.t. the inputs \(\varvec{x}\)) becomes more efficient.

Implementation aspects

The implementation of the algorithm described above was done in the Python software framework QUEENS [43]. Herein, the Gaussian process regression module GPy [44] was used for the construction of the Gaussian process surrogates. The sequential Monte Carlo algorithm based on algorithm 1 is also implemented in QUEENS. For processing of forward model results the Python package vtk for “The Visualization Toolkit” was used [45]. Some of the following visualizations were generated with Seaborn [46] and Matplotlib [47]. Parallel axis plots are generated with plotly [48]. We furthermore use PyTorch [49] for the generation of the Sobol sequences. The forward models were solved with the in-house C++ research code BACI [50].

Numerical examples

In this section, we present our Bayesian calibration approach for a coupled fluid–structure interaction problem. We demonstrate the approach with generated data to better assess its individual steps. Our focus lies on the presentation of the proposed approach and some general characteristics of the resulting posteriors. In applications with real-world data, the procedure can be used without any changes. The computational mechanics models in the examples are schematic models for the fluid–biofilm interaction that motivates our research; therein, the fluid–solid interface deforms as a consequence of the interaction. A further description of the experiments is moved to the appendix, as we want to focus on the model here. In the following examples, we calibrate biofilm material properties under partially uncertain experimental conditions. For the numerical demonstrations we use the fluid–structure interaction (FSI) between incompressible Navier–Stokes flow and a hyperelastic nonlinear solid material model, which is briefly introduced in the appendix. Although the presented calibration approach is equally applicable to single-field problems with deformable boundaries, we take on the challenge of a coupled multi-physics FSI model because we want to highlight the benefit of the approach in such applications. A variety of different models for biofilms is available and further effects can be included (see, e.g., [12, 51]), which would just lead to different forward models.

Problem setup

The calibration is performed for the hyperelastic material properties of the solid domain, for which we calibrate the two parameters of a Saint-Venant-Kirchhoff material model. For the given setup of FSI models for biofilms and the biofilm flow cell data, the location of the fluid–biofilm interface is the primary available data and is therefore used for comparison. A schematic sketch of the problem setup is drawn in Fig. 2. For easier demonstration we first investigate a two-dimensional calibration problem (\(\dim (\varvec{x})=2\)) without experimental uncertainties (\(\dim (\varvec{\theta })=0\)) and then move on to more complex examples.

Fig. 2

Schematic of problem setup with domain and interface names. A biofilm (green domain) is grown on the flow cell floor and is interacting with a flowing liquid (blue domain) that is providing nutrients to and also deforming the biofilm which in return is changing the flow field. Undeformed (gray dotted line) and deformed biofilm interface. Inflow (teal) and outflow (yellow) boundaries of the flow cell model are also shown

We choose the same problem setup as presented in [12]. The biofilm geometry is inspired by analyses of experimental results in [11, 14]. The model domain represents a two-dimensional channel with dimensions \(1\,\textrm{mm}\times 2\,\textrm{mm}\), where a horizontal fluid flow with parabolic profile is enforced on the left boundary (see \(\Gamma ^{\textrm{F}}_\textrm{in}\) in Fig. 2) with a maximal volume rate of \( \dot{V}_{\textrm{in}}= 100\,{\mathrm {mm^2}/\textrm{s}} \). The solid biofilm (green) is attached to the channel floor. A no-slip condition for the fluid is used on the channel floor and the top boundary, and the biofilm is fixed to the channel floor (see \(\Gamma ^{\textrm{F}}_\textrm{D}\) and \(\Gamma ^{\textrm{S}}_\textrm{D}\) in Fig. 2); a horizontal outflow is enforced on the right edge (see \(\Gamma ^{\textrm{F}}_\textrm{out}\) in Fig. 2). These boundary conditions are modeled via Dirichlet boundary conditions on the fluid velocity (and the solid displacement accordingly) and a free outflow in horizontal direction. The horizontal outflow condition represents the continuation of the empty channel to the left and right of the modeled domain.

To generate the artificial experimental data, we use a forward simulation of the fluid–biofilm interaction with a Saint-Venant-Kirchhoff material model for the solid biofilm domain. The material is characterized by two parameters, namely the Young’s modulus \(E\) and the Poisson’s ratio \(\nu \). We summarize the input parameters chosen for the generation of the data in the ground truth vector \({\varvec{x}}_{\textrm{gt}}=\begin{bmatrix}\nu = 0.3, E= 400\,\textrm{Pa}\end{bmatrix}^{\textsf{T}}\). The fluid has a dynamic viscosity of \(\mu ^{{\textrm{F}}}= 10^{-3}\,\mathrm {Pa\, s} \) and a density of \( \rho ^{{\textrm{F}}}= 10^3\,{\textrm{kg}/\mathrm {m^3}} \) as a model for water. The biofilm has the same density as the fluid. The solution of the velocity and pressure field of the fluid and the displacement field of the biofilm is depicted in Fig. 3 for the regarded quasi-steady deformed state of the ground truth forward model evaluation. As a reaction to the load imposed by the fluid inflow boundary condition, the solid bends towards the right, as plotted in Fig. 4. In Fig. 4a we see the artificial observation data \(Y_{\textrm{obs},C}\), i.e., the result of the reference simulation with ground truth values \({\varvec{x}}_{\textrm{gt}}\); \(Y_{\textrm{obs},C}\) represents the deformed location (green) of the interface, in this example for one single point in time \(C\). Figure 4b additionally shows the results for some exemplary parameter combinations.

Fig. 3

Field solutions of the ground truth simulation with parameters \({\varvec{x}}_{\textrm{gt}}\) on the deformed geometry. a Fluid velocity magnitude, solid displacement magnitude. b Fluid pressure solution

Fig. 4

Exemplary forward model results for deformed interface shapes for different input parameter samples compared to the result with ground truth \({\varvec{x}}_{\textrm{gt}}=[\nu = 0.3,\;E= 400\,\textrm{Pa}]^{\textsf{T}}\) as input and the undeformed state

Likelihood response surface for different discrepancy measures

In a first step, we compare the effect of the different discrepancy measures introduced above. We directly approximate the log-likelihood by the posterior mean function of a Gaussian process surrogate and use the discussed discrepancy measures. Additionally, we also provide the posterior standard deviation of the GP to quantify the remaining uncertainty in the surrogate. All surrogate models for the log-likelihood function resulting from the different discrepancy measures use the same training input, which was sampled with a quasi-random Sobol sequence to yield a space-filling training design. The training inputs are different parameter combinations for the forward model, and the training outputs result from the corresponding forward simulations.

For an estimation of a suitable \( \sigma _{\textrm{N}}\) in the likelihood model (4), the known resolution of \(\approx 8 \,\mathrm {\mu m} \) of OCT [14, 15] is considered. The noise standard deviation is assumed to be of the same order of magnitude, such that \(\sigma _{\textrm{N}}= 0.01\,\textrm{mm}\) is used in the following. The OCT resolution and the standard deviation in the likelihood model are not expected to be equal; a further discussion is omitted here, as the demonstration works independently of this choice and an expressive value can only be found in relation to real data and the chosen image segmentation. For the RKHS norm we need two parameters, \(\sigma _{\textrm{N}}\) and \(\sigma _\textrm{W}\). The length scale \(\sigma _\textrm{W}\) for the RBF kernel in the RKHS approach in (10b) is estimated as \(\sigma _\textrm{W}= 0.005\,\textrm{mm}\), which is approximately \(10\%\) of the maximal displacement magnitude (see Fig. 3a).

The resulting regression models for the likelihoods are shown in Fig. 5 for \(n_\textrm{train}= 1000\) training points with a Matérn 3/2 kernel (see the appendix on GPs, Eq. (27), for details). In general, the likelihood over the parameters can be understood as a score of how well the forward model response with the respective parameters matches the reference data. A discussion and interpretation of the figure follows below. For this first comparison, we simply use this large number of training points and postpone the discussion of the GP convergence over \(n_\textrm{train}\) to the case using only the RKHS norm measure. For the distribution of the measurement points in this comparison, the reader is referred to [12]. In Fig. 5 the likelihood fields are determined from the logarithmic likelihoods, and these fields are normalized in the plots for better comparability. Parameter combinations that led to failed forward model evaluations are marked by gray crosses in the following figures. For our setup, the failing simulations occurred for low values of Young’s modulus (located on the left side of the plots), representing soft biofilm material, which leads to large mesh distortions in the ALE FSI approach. For the sake of comparability, the respective logarithmic likelihoods are plotted in Fig. 6 and the associated standard deviations of the regression models in Fig. 7.

Fig. 5

Likelihoods for different surface measure distances with \(n_\textrm{train}= 1000\). a Euclidean distance at measurement points, b closest point projection distances, c RKHS norm. Normalized for better comparability. Training samples as circles for successful and gray crosses (on the left side) for failed forward model evaluations

Fig. 6
figure 6

Logarithmic likelihood regression model for different surface measure distances with \(n_\textrm{train}= 1000\). a Euclidean distance at measurement points, b closest point projection distances, c RKHS norm. Training samples as circles for successful and gray crosses for failed forward model evaluations

Fig. 7
figure 7

Standard deviation from logarithmic likelihood regression model variance for different surface measure distances with \(n_\textrm{train}= 1000\). a Euclidean distance at measurement points, b closest point projection distances, c RKHS norm. Training samples as circles for successful and gray crosses for failed forward model evaluations

It can be seen that the likelihoods show high values (in red) along a characteristic curved shape with its peak at the expected ground truth parameters \({\varvec{x}}_{\textrm{gt}}=\begin{bmatrix}\nu = 0.3,&E= 400\,\textrm{Pa}\end{bmatrix}^{\textsf{T}}\). A high likelihood corresponds to a high probability density of the parameter combination to represent the reference data. This means that the corresponding values for \({\varvec{x}}\) in this region of the input space result in simulation outputs that are very close to the observation data under the employed discrepancy measure. The likelihood falls very close to zero when moving away from the high-likelihood regions, representing low similarity of the forward model result compared to the reference data. In particular, the failed simulations (gray crosses) fall in a region with very low likelihood values, which renders them irrelevant for our investigations. For the rest of the input space, Fig. 7 shows that the standard deviation of the regression model is low and only rises in regions with little data.

Comparing the Euclidean distance measure in Fig. 5a and the closest point projection in Fig. 5b, it can be seen that the latter is more peaked around the maximum of the likelihood, which lies close to the ground truth value \({\varvec{x}}_{\textrm{gt}}\). This is also a consequence of the formulations (4) and (5), as the numbers of distance measurements differ greatly: \({n_\mathrm {\textrm{mp}}}=10 \) measurement points versus \({n_\textrm{in}}=66\) interface nodes. A higher number of individual single-point measurements generally leads to a more peaked likelihood for the same \(\sigma _{\textrm{N}}\). It must also be stated that no further weighting of the closest point projection distances was applied, so all nodal closest point projection distances are considered equally important. This includes the distances of many mesh nodes that are close to the boundary conditions, where the displacement magnitude is lower. An additional data compression approach, e.g., kernel principal component analysis [21, 52], or a selection of only a subset of the interface nodes could be used to increase comparability, but this is outside the scope of the current article. The likelihood from the RKHS based distance measure (Fig. 5c) combines both features: it is expressive around the maximum likelihood (ML) point and still carries information from more distant points. The RKHS norm measure correlates all discretized interface locations and orientations of the model with all counterparts in the observation (see (11)); it is therefore also expected to be the most detailed measure in this comparison.
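To make the structure of such a kernel-based interface comparison concrete, the following minimal sketch computes a simplified RKHS-type discrepancy between two discretized interfaces and turns it into a Gaussian log-likelihood. It uses only point locations with a squared-exponential kernel of length scale \(\sigma _\textrm{W}\) and omits the orientation (current) contributions of (11); all function names and the exact likelihood scaling are illustrative assumptions, not the implementation used for the results above.

```python
import numpy as np

def rbf_kernel(a, b, sigma_w):
    """Squared-exponential kernel between point sets a (n x d) and b (m x d)."""
    sq_dist = np.sum((a[:, None, :] - b[None, :, :]) ** 2, axis=-1)
    return np.exp(-0.5 * sq_dist / sigma_w ** 2)

def rkhs_discrepancy(sim_pts, obs_pts, sigma_w=0.005):
    """Squared kernel distance between the two empirical interface representations
    (a simplified RKHS norm that ignores interface orientations)."""
    k_ss = rbf_kernel(sim_pts, sim_pts, sigma_w).mean()
    k_oo = rbf_kernel(obs_pts, obs_pts, sigma_w).mean()
    k_so = rbf_kernel(sim_pts, obs_pts, sigma_w).mean()
    return k_ss + k_oo - 2.0 * k_so

def log_likelihood(sim_pts, obs_pts, sigma_n=0.01):
    """Gaussian log-likelihood over the scalar discrepancy (noise scale sigma_n in mm)."""
    return -0.5 * rkhs_discrepancy(sim_pts, obs_pts) / sigma_n ** 2
```

The same structure could be reused for the Euclidean and closest point projection measures by replacing `rkhs_discrepancy` with the respective sum of squared point distances.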

The likelihood based on the forward model evaluations, adequately combined with prior distributions, results in the posterior (see (1)), which is a probability density over the parameters given the observations. An expressive posterior, and therefore an expressive likelihood, is desirable because it allows the analyst to compare posterior values for different parameter combinations and thereby develop an understanding of the forward model in relation to the observations. In the case of a very flat likelihood, the conclusion is that all parameter combinations are similarly well suited to explain the observations and that the parameters are therefore potentially insignificant for the forward model, at least within the regarded parameter intervals. This means that all input parameters \({\varvec{x}}\) lead to forward model results that are very close to the observation under the employed discrepancy measure. The expressive shape of the likelihood based on the RKHS measure underlines that this measure is well suited for Bayesian calibration in the given example. It is therefore used in all following examples.

The presented likelihoods were generated using the different discrepancy measures (6), (7) and (11) and all show similar characteristics in the shape of the likelihood. It can therefore be concluded that, in this setting of a fluid–biofilm interface-based measurement in a flow cell experiment, an underestimation of Young’s modulus \(E\) is coupled to an underestimation of Poisson’s ratio \(\nu \). In applications with real experimental data, it would make sense to restrict the training points to the intervals that are believed to contain the optimum or at least relevant values of the parameters. That could mean restricting the Poisson’s ratio to positive values, as this is what would be expected for most materials. Nevertheless, the resulting regions of high likelihood are very smooth over the whole tested range between \(\nu = -0.8\) and \(\nu = 0.5\) and do not show a distinct border between positive and negative values. This is interesting with regard to biofilm mechanics in flow cell experiments, as it shows that the estimation of \(E\) and \(\nu \) is coupled for all surface comparisons that were tested. For some investigations, this also hints at the need to incorporate additional measurements or information, e.g., prior knowledge.

Convergence over number of training points

The most costly part of the presented algorithm is the forward model evaluations that are necessary to generate the training data \(\mathcal {D}\) of the GP log-likelihood regression model. While the convergence of the GP over the number of training points depends on the character of the underlying function as well as on the selected design of experiments, we want to show a short qualitative convergence study for the problem at hand using the distance measure based on the RKHS inner product. In general, a convergence study of a GP w.r.t. the number of training points is difficult, as it is usually not feasible to increase the training set size by orders of magnitude. Therefore, other strategies, e.g., leave-one-out cross-validation, can be used as a proxy for the regression error [39]. The requirement on the number of training points and the regression model is to capture the relevant shape characteristics of the likelihood.
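For a GP regression model, the leave-one-out residuals mentioned above do not require refitting the model once per training point; they follow in closed form from the inverse kernel matrix (see [39], Sec. 5.4.2). A small numpy sketch, assuming a kernel matrix K built with the fitted hyperparameters and training targets y:

```python
import numpy as np

def gp_loo(K, y, nugget):
    """Closed-form leave-one-out residuals and predictive variances for GP regression.
    K: (n, n) kernel matrix over the training inputs, y: (n,) training targets,
    nugget: fixed noise variance added to the diagonal."""
    n = K.shape[0]
    K_inv = np.linalg.inv(K + nugget * np.eye(n))
    alpha = K_inv @ y
    loo_var = 1.0 / np.diag(K_inv)          # LOO predictive variances
    loo_residual = alpha * loo_var          # y_i minus the LOO predictive mean
    return loo_residual, loo_var

# A scalar proxy for the regression error, e.g.:
# rmse_loo = np.sqrt(np.mean(gp_loo(K, y, 1e-5)[0] ** 2))
```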

For the application of such an approach, it is crucial to know how many forward model evaluations are required to obtain results efficiently. To get a first picture, the Gaussian process regression model for the logarithmic likelihood is created for a series of training point set sizes \(n_\textrm{train}\). For this qualitative study, only the RKHS norm based likelihood was used, as it provides the most detailed comparison and seems to give the most expressive shape of the posterior. The likelihood parameters were set to \(\sigma _\textrm{W}=0.005 \,\textrm{mm}\) and \(\sigma _{\textrm{N}}^2=0.0005\,\textrm{mm}^2\) in all following examples. The choice \(\sigma _{\textrm{N}}^2=0.0005\,\textrm{mm}^2 \) relates in our case to the discretization size, as the rounded squared average interface element length is \(\bar{l}^2 \approx 0.0005 \,\textrm{mm}^2 \). It is difficult to interpret \(\sigma _{\textrm{N}}\) in relation to the measurement error alone, as in particular the image segmentation approach chosen to determine the model fluid–biofilm interface from experimental data also influences this choice. We deviated from this choice of \(\sigma _{\textrm{N}}\) in the comparison of the measures above to keep it the same for all measures.

We used a Matérn 3/2 kernel (see appendix on GP, Eq. (27)) with one length scale hyperparameter l (in the standardized input space) and one signal variance \(\sigma _0^2\), which are determined via ML estimation using an L-BFGS-B optimizer on the evidence of the GP (see Eq. (29)). The Matérn 3/2 kernel appeared to be best suited to represent the likelihood field as it resulted for the given example. Especially the great variety in steepness, and therefore the low smoothness of the likelihood, seems to be the challenge for the regression approach. However, at the moment no general recommendation for the GP kernel can be given. As commonly done [39], we used a fixed nugget noise \( \sigma _\textrm{n}^2=10^{-5}\cdot \left( \textrm{max}\left( \varvec{\mathfrak {L}}_\textrm{train}\right) -\textrm{min}\left( \varvec{\mathfrak {L}}_\textrm{train}\right) \right) \) to stabilize the training of the GP.
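The following sketch illustrates how such a log-likelihood surrogate could be assembled, e.g., with the GPy library [44] and a Sobol training design from scipy. The parameter bounds, the sample size of 1024 (a power of two preferred by the Sobol generator) and the placeholder `evaluate_forward_model_batch` for the parallel forward runs are illustrative assumptions, not the exact setup used here.

```python
import numpy as np
import GPy
from scipy.stats import qmc

# Space-filling training design from a Sobol sequence (bounds are illustrative)
sampler = qmc.Sobol(d=2, scramble=True, seed=0)
x_unit = sampler.random(n=1024)
x_train = qmc.scale(x_unit, l_bounds=[-0.8, 100.0], u_bounds=[0.5, 800.0])  # [nu, E in Pa]

# Log-likelihood values from the (embarrassingly parallel) forward runs; placeholder function
log_lik_train = np.asarray(evaluate_forward_model_batch(x_train)).reshape(-1, 1)

# Standardize the inputs and build a Matern 3/2 kernel with a single length scale
x_std = (x_train - x_train.mean(axis=0)) / x_train.std(axis=0)
kernel = GPy.kern.Matern32(input_dim=2, variance=1.0, lengthscale=1.0)
gp = GPy.models.GPRegression(x_std, log_lik_train, kernel)

# Fixed nugget noise relative to the spread of the training targets
nugget = 1e-5 * (log_lik_train.max() - log_lik_train.min())
gp.Gaussian_noise.variance = nugget
gp.Gaussian_noise.variance.fix()

gp.optimize(optimizer='lbfgsb', messages=False)   # ML estimation of the hyperparameters
mean, var = gp.predict(x_std[:5])                 # surrogate mean and variance at some inputs
```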

In the following, the convergence of the likelihood surrogate model over the number of used training points is shown qualitatively for the problem at hand. For this study, the incremental sampling property of the Sobol sequence is exploited: all training sets are consecutive samples from the same Sobol sequence. From Fig. 8 it is apparent that only very few training points are necessary to estimate the general shape of the likelihood and to obtain a rough estimate of the optimum. With \(n_\textrm{train}=200\) samples the ML estimate is already in good agreement with the ground truth \({\varvec{x}}_{\textrm{gt}}=\begin{bmatrix}\nu = 0.3,&E= 400\,\textrm{Pa}\end{bmatrix}^{\textsf{T}}\).

Fig. 8
figure 8

Convergence of the Gaussian process regression model for the likelihood function over number of training points \(n_\textrm{train}\) (forward model evaluations). a \(n_\textrm{train}=10\), b \(n_\textrm{train}=20\), c \(n_\textrm{train}=50\), d \(n_\textrm{train}=100\), e \(n_\textrm{train}=200\), f \(n_\textrm{train}=900\). In all cases the same Sobol sequence was used to generate a space-filling training data set. Training samples as circles for successful and gray crosses (left side) for failed forward model evaluations

It can be concluded that for this two-dimensional example the likelihood surrogate captures the relevant features and the global shape well for \(n_\textrm{train}= 200 \) forward model evaluations, and no significant gain in accuracy can be expected for a moderate increase of the sample size. Given that in the following examples a comparison with different priors is made and another problem dimension in form of an uncertain parameter is added, which increases the complexity, we use the Gaussian process model with one length scale for all (standardized) parameters, a fixed nugget noise and \(n_\textrm{train}=1000\) training points for the following examples (see Remark 6 on dimensionality).

Remark 7

(Distribution of training points) With the applied space-filling, quasi-random distribution of the samples, there is no compromise between exploration and exploitation; the emphasis is put entirely on exploration. It is possible to use available prior information for the generation of the samples and thereby place a higher density of samples in regions of high prior density, or to use an iterative approach and refine the samples in high-posterior regions.

Calibration of constitutive parameters in biofilm models

In this subsection, the regression model is generated according to the findings of the previous examples. The GP is constructed with a Matérn 3/2 kernel, a single length scale and signal variance for all parameters, and a fixed nugget noise variance \(\sigma _\textrm{n}^2\) in (29). 1000 samples were used for the training of the GP. As the likelihood model is available in form of a cheap-to-evaluate surrogate, we use 5000 SMC particles and 20 rejuvenation steps per SMC iteration (which would amount to 1 million likelihood calls for 10 SMC iterations). For the adaptive step size of the SMC iterator a control parameter of \(\zeta = 0.995\) was used (see Algorithm 1).
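For orientation, the following is a compact sketch of an adaptive-tempering SMC sampler with random-walk Metropolis rejuvenation, operating entirely on the cheap GP surrogate of the log-likelihood. It is a simplified stand-in for Algorithm 1: the interpretation of the control parameter as an ESS target, the proposal scale and the function signatures are assumptions made for illustration.

```python
import numpy as np

def next_temperature(gamma, log_lik_vals, zeta, n):
    """Bisect for the next tempering exponent so the incremental ESS stays near zeta * n."""
    def ess(g):
        w = np.exp((g - gamma) * (log_lik_vals - log_lik_vals.max()))
        return w.sum() ** 2 / (w ** 2).sum()
    if ess(1.0) >= zeta * n:
        return 1.0
    lo, hi = gamma, 1.0
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if ess(mid) >= zeta * n else (lo, mid)
    return 0.5 * (lo + hi)

def smc_sample(sample_prior, log_prior, log_lik, n=5000, n_rejuv=20,
               zeta=0.995, rw_scale=0.05, seed=0):
    """SMC with adaptive tempering: targets p_t(x) proportional to p(x) * lik(x)^gamma_t."""
    rng = np.random.default_rng(seed)
    x = sample_prior(n)                                   # (n, d) particles from the prior
    gamma = 0.0
    while gamma < 1.0:
        ll = log_lik(x)
        gamma_new = next_temperature(gamma, ll, zeta, n)
        logw = (gamma_new - gamma) * ll                   # incremental importance weights
        w = np.exp(logw - logw.max()); w /= w.sum()
        x = x[rng.choice(n, size=n, p=w)]                 # multinomial resampling
        for _ in range(n_rejuv):                          # random-walk Metropolis rejuvenation
            prop = x + rw_scale * rng.standard_normal(x.shape)
            log_acc = (log_prior(prop) + gamma_new * log_lik(prop)
                       - log_prior(x) - gamma_new * log_lik(x))
            accept = np.log(rng.random(n)) < log_acc
            x[accept] = prop[accept]
        gamma = gamma_new
    return x   # equally weighted posterior draws (weights are reset by the resampling)
```

Here `log_lik` would evaluate the GP surrogate mean and `sample_prior`/`log_prior` encode the prior assumptions discussed below; because every step is vectorized over all particles, the many likelihood calls remain inexpensive.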

In the following examples, we examine the influence of priors on the parameters and of uncertainties on the resulting posteriors. In Fig. 9, all results discussed in this section are plotted on the same page for better comparability. The different cases are described and discussed in the following paragraphs. Figure 9 displays the resulting posteriors of the examples in form of SMC particle approximations. To visualize the character of the data, one particle distribution is shown explicitly in Fig. 9b, where the particles are plotted at their respective coordinates. The particle weights are illustrated by circle size and a color scale. The other three subplots (Fig. 9a, c, d) show two-dimensional hexagonal histograms in a color scale: the particles are sorted into the displayed bins and summed up using their weights. The percentiles of the two-dimensional posteriors are additionally summarized by kernel density estimates (KDE), shown as black solid lines. For the KDE, a radial basis function (RBF) kernel with bandwidth optimization was used.

On the top and right sides, the marginal distributions of the two individual input parameters are plotted as histograms. The one-dimensional marginal posterior distributions \(p\left( E|Y_{\textrm{obs},C}\right) \) and \(p\left( \nu |Y_{\textrm{obs},C}\right) \) can easily be approximated by sorting the weighted particles into bins in the respective dimension. The marginals are additionally displayed in form of KDEs as black solid lines. The used priors are indicated as red dashed lines in all following plots.
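The binning of weighted particles into the hexagonal histogram and the one-dimensional marginals can be reproduced with standard tooling; a short matplotlib/numpy sketch, where `particles` (columns \(E\), \(\nu \)) and `weights` are assumed to come from the SMC run:

```python
import numpy as np
import matplotlib.pyplot as plt

# particles: (n, 2) array with columns [E, nu]; weights: (n,) normalized SMC weights
fig, ax = plt.subplots()
hb = ax.hexbin(particles[:, 0], particles[:, 1], C=weights,
               reduce_C_function=np.sum, gridsize=30, cmap='Blues')
ax.set_xlabel('E [Pa]')
ax.set_ylabel(r'$\nu$ [-]')
fig.colorbar(hb, label='accumulated particle weight')

# One-dimensional marginals by sorting the weighted particles into bins
p_E, edges_E = np.histogram(particles[:, 0], bins=40, weights=weights, density=True)
p_nu, edges_nu = np.histogram(particles[:, 1], bins=40, weights=weights, density=True)
```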

Characteristic points deduced from the particle approximation are marked as crosses. The maximum a posteriori (MAP) estimate \({\varvec{x}}_{\textrm{MAP}}\) is marked in green and the posterior mean (PM) \({\varvec{x}}_{\textrm{PM}}\) in orange.

In high dimensions, the posterior cannot be plotted as easily as in the two-dimensional case. The analyst wants a comparative overview of where the posterior mean lies and how the probability mass is distributed. For that, and as another common approximation, we also show percentile lines of the global Gaussian approximation to the posterior in orange. The global Gaussian approximation is parameterized by the posterior mean (PM) vector \({\varvec{x}}_{\textrm{PM}}\) and the covariance matrix, both of which can be computed from the weighted SMC particles very quickly.
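Both moments follow directly from the weighted particles; a two-line numpy sketch (the variable names are assumptions, as above):

```python
import numpy as np

# Weighted posterior mean and covariance for the global Gaussian approximation
x_pm = np.average(particles, axis=0, weights=weights)
cov_pm = np.cov(particles, rowvar=False, aweights=weights)
```

The orange percentile lines in Fig. 9 are then simply level sets of the Gaussian density with mean \({\varvec{x}}_{\textrm{PM}}\) and this covariance.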

We also present the Laplace approximation [21] of the posterior distribution around the MAP estimate, depicted in Fig. 9a and c with green solid lines for the two cases without uncertainty. The Laplace approximation can be understood as a local quadratic approximation of the posterior density function around its MAP in the log space, in form of a Gaussian distribution. The covariance matrix of the resulting local Gaussian distribution gives an idea of how fast the posterior changes in a certain direction, starting from the MAP estimate. Please note that the posterior distribution is not known in closed form but only approximated by SMC particles with associated weights. We approximate the MAP estimate by the SMC particle that scored the highest posterior value in the last rejuvenation step of the last iteration. In these examples, the necessary gradients of the posterior distribution were approximated by finite differences w.r.t. the input variables \({\varvec{x}}\) on the GP regression model. The Laplace approximation was omitted for the case with uncertain inflow. Laplace approximations are used in approximative Bayesian inverse analysis methods [6], where the MAP is first found through optimization and the Laplace approximation is then computed for an estimate of the variance.
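A minimal sketch of such a finite-difference Laplace approximation on the surrogate log-posterior; `log_post` (log-prior plus GP surrogate log-likelihood) and the step size are assumptions:

```python
import numpy as np

def laplace_covariance(log_post, x_map, h=1e-3):
    """Covariance of the local Gaussian (Laplace) approximation at the MAP estimate,
    i.e. the negative inverse of a central finite-difference Hessian of log_post."""
    d = x_map.size
    hessian = np.zeros((d, d))
    eye = np.eye(d) * h
    for i in range(d):
        for j in range(d):
            hessian[i, j] = (log_post(x_map + eye[i] + eye[j])
                             - log_post(x_map + eye[i] - eye[j])
                             - log_post(x_map - eye[i] + eye[j])
                             + log_post(x_map - eye[i] - eye[j])) / (4.0 * h ** 2)
    return np.linalg.inv(-hessian)
```

The step size h has to be chosen with the smoothness of the GP surrogate in mind, since the second derivatives inherit any wiggliness of the regression model.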

Fig. 9
figure 9

Representation of the posterior distribution \(p\left( E,\nu |Y_{\textrm{obs},C}\right) \) in histograms. Two-dimensional hexagon histograms for the joint posterior with the according marginals \(p\left( E|Y_{\textrm{obs},C}\right) \) and \(p\left( \nu |Y_{\textrm{obs},C}\right) \) attached to top and sides. Percentiles of kernel density estimates depicted as black solid lines, global Gaussian approximations as orange solid lines around the posterior mean (\({\varvec{x}}_{\textrm{PM}}\) as orange cross) and the Laplace approximations as green solid lines around the MAP estimate (\({\varvec{x}}_{\textrm{MAP}}\) as green cross). a Posterior with uniform prior and b resulting particle and weight distribution from SMC. c Posterior with informed priors. d Posterior with uniform prior and uncertain inflow

Influence of prior assumptions on the posterior distribution

To show the influence of quantifiable prior knowledge on the posterior, we discuss the same example once with a uniform prior and once with a combined beta/log-normal prior on the model input \({\varvec{x}}\).

Uniform prior In this first demonstration we choose uninformative uniform prior distributions for Young’s modulus, \(p\left( E\right) =\mathcal {U}\left( E|100\,\textrm{Pa},800\,\textrm{Pa}\right) \), and for the Poisson’s ratio, \(p\left( \nu \right) =\mathcal {U}\left( \nu |-0.8,0.5\right) \). These are independent priors on the parameters and can be combined to \(p\left( E, \nu \right) = p\left( E\right) p\left( \nu \right) \).

Figure 9a displays the approximation of the resulting posterior in form of a hexagonal histogram plot generated from the weighted particles of the SMC run. In Fig. 9b the resulting particle distribution of the SMC is shown along with the color-coded particle weights.

The posterior in Fig. 9a shows an almost linear band of high density. This shows that it is more crucial to have a good value for Young’s modulus \(E\) than for Poisson’s ratio \(\nu \) to obtain good similarity between model output and reference data in the given parameter ranges. The posterior mean (PM) vector computes to \({\varvec{x}}_{\textrm{PM}}=[E\approx 431\,\textrm{Pa},\; \nu \approx -0.0071]^{\textsf{T}}\). In Fig. 9a the Gaussian approximation has the same orientation as the particle approximation to the posterior, but it still cannot represent the complexity of the posterior. The maximum a posteriori (MAP) estimate is approximated as \({\varvec{x}}_{\textrm{MAP}}= [E\approx 388\,\textrm{Pa},\; \nu \approx 0.266]^{\textsf{T}}\). This \({\varvec{x}}_{\textrm{MAP}}\) is close to the ground truth, given the number of training points and the resulting resolution of the training points in the input space.

Interestingly, the Laplace approximation, represented by its green percentile contour lines in Fig. 9a, is oriented almost orthogonally to the actual posterior (solid black line). This means that the Laplace approximation gives a misleading local approximation in this specific case. This can occur if the posterior has a more complex curvature around the MAP; in our example, this might partially be induced by the GP approximation as well. It seems that the Laplace approximation cannot live up to the complexity of the given posterior, and simplified methods based on the Laplace approximation would therefore work poorly in the presented examples without the analyst knowing. This is why we advocate the fully Bayesian treatment for this kind of problem.

As mentioned before, we also plot the marginal posterior distributions \(p\left( E|Y_{\textrm{obs},C}\right) \) and \(p\left( \nu |Y_{\textrm{obs},C}\right) \), respectively. The marginals show that \(p\left( E|Y_{\textrm{obs},C}\right) \) has a single, stable, global optimum, whereas \(p\left( \nu |Y_{\textrm{obs},C}\right) \) forms a plateau of high density. Due to the strong coupling of \(E\) and \(\nu \) and the complex shape of the joint posterior \(p\left( E,\nu |Y_{\textrm{obs},C}\right) \), the marginal posterior distributions alone are, however, not informative about the coupling effects in the joint posterior distribution.

Informed prior Physical insight can be incorporated into prior assumptions; it can have a great influence on the posterior distribution and should be integrated into the analysis. Besides the uninformative uniform prior used in the first example, we now also want to demonstrate the effect of an informed prior. For the Young’s modulus, a log-normal prior with a mode of \(E= 300 \,\textrm{Pa}\) is assumed. The log-normal is parameterized as \( \mathcal{L}\mathcal{N}\left( E|\mu _{{\mathcal {L}}{\mathcal {N}}}\approx 5.86,\sigma _{{\mathcal {L}}{\mathcal {N}}}= 0.4\right) \), where the parameters \(\mu _{{\mathcal {L}}{\mathcal {N}}}, \sigma _{{\mathcal {L}}{\mathcal {N}}}\) are not the mean and standard deviation of the distribution itself but of the underlying normal distribution, i.e., \(\log {(E)}\sim \mathcal {N}\left( \mu _{{\mathcal {L}}{\mathcal {N}}},{\sigma _{{\mathcal {L}}{\mathcal {N}}}^2}\right) \). This accounts for the fact that the Young’s modulus must be positive, that values close to zero are very unlikely, and that values above the mode are more probable than values below it. For the Poisson’s ratio, a beta-distribution \(\mathcal {B}\left( \nu |a=43/13,b=22/13\right) \) between \(-0.8\) and 0.5 is used as prior. This distribution has its mode at \(\nu = 0.2\) and accounts for the belief that the Poisson’s ratio is more likely to take positive values; it respects the physical bounds of \(-1.0\) and 0.5, with decreasing probability towards the boundaries of its interval.
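These priors are straightforward to set up, e.g., with scipy.stats; the following sketch reproduces the stated modes (the helper name `log_prior` is an illustrative assumption):

```python
import numpy as np
from scipy import stats

# Log-normal prior on E with mode 300 Pa: mode = exp(mu - sigma^2)
sigma_ln = 0.4
mu_ln = np.log(300.0) + sigma_ln ** 2                 # approx. 5.86
prior_E = stats.lognorm(s=sigma_ln, scale=np.exp(mu_ln))

# Beta prior on nu, shifted and scaled to the interval [-0.8, 0.5], mode at 0.2
prior_nu = stats.beta(a=43 / 13, b=22 / 13, loc=-0.8, scale=1.3)

def log_prior(E, nu):
    """Independent informed priors combined as p(E, nu) = p(E) * p(nu)."""
    return prior_E.logpdf(E) + prior_nu.logpdf(nu)
```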

The modes of the presented priors are intentionally chosen to deviate from the ground truth \({\varvec{x}}_{\textrm{gt}}\) in order to show the effect of informative priors. The priors are plotted in Fig. 9c alongside the respective marginal posterior distributions. It can easily be observed that, compared to the uniform priors used in Fig. 9a, the informed prior assumptions, i.e., distributions that weight specific areas of the input space higher than others, lead to a posterior distribution that is more pronounced around the ground truth \({\varvec{x}}_{\textrm{gt}}\) and has less probability mass in regions with low prior density. As the majority of the probability mass of the resulting posterior is now concentrated in a more compact area of the design space, the marginal posterior distributions \(p\left( E|Y_{\textrm{obs},C}\right) \) and \(p\left( \nu |Y_{\textrm{obs},C}\right) \) also show a more defined shape with one predominant mode.

Similar to Fig. 9a, the joint posterior \(p\left( E,\nu |Y_{\textrm{obs},C}\right) \) has more variance in the \(\nu \)-dimension, rendering \(\nu \) a sloppy parameter. This also becomes apparent as the marginal posterior distribution \(p\left( \nu |Y_{\textrm{obs},C}\right) \) almost coincides with the prior assumption \(p\left( \nu \right) \), indicating a very small influence of this parameter on the likelihood function. This also means that, for this type of measurement, the exact value of the model parameter \(\nu \) is less important for the agreement of mechanical model and observed experiment. In \(p\left( E|Y_{\textrm{obs},C}\right) \), on the other hand, the density does not have the same mode as the prior \(p\left( E\right) \). This underlines that the likelihood contains characteristic information about this parameter.

The MAP estimate takes the values \({\varvec{x}}_{\textrm{MAP}}= [E\approx 361\,\textrm{Pa},\; \nu \approx 0.195]^{\textsf{T}}\) in this case. Both parameter values are slightly lower than in the run with uniform prior. With the prior modes deviating from the ground truth, a deviation of \({\varvec{x}}_{\textrm{MAP}}\) from the ground truth towards lower values is expected. Furthermore, we see that the global Gaussian approximation of the posterior around \({\varvec{x}}_{\textrm{PM}}= [E\approx 377\,\textrm{Pa},\; \nu \approx 0.08]^{\textsf{T}}\) moves closer to the mode of the posterior distribution, as the low-likelihood areas are additionally down-weighted by low prior values and therefore lose probability mass compared to the case with uniform priors. The Laplace approximation is still not a very good local approximation of the posterior.

Material calibration under uncertain boundary conditions

Now we also demonstrate the treatment of additional uncertain influences on the system, denoted as \(\theta \) above. We use the example of an uncertain inflow volume rate in the flow cell experiment. As generally the biggest biofilm patches in the channel are analyzed, they take up a significant portion of the channel cross-section and force the flow to go around them. Furthermore, only the middle of the channel can be scanned at high quality using OCT. Hence, the distribution of the volume flow rate between the outlying parts of the cross-section and the analyzed patch is subject to uncertainties. Overall, these considerations are summarized in the assumption of an uncertain inflow rate distribution whose main mode is significantly lower than the average inflow rate, with a spread around that mode and low density for higher flow rates. This is modeled by assuming \(p\left( \dot{V}_{\textrm{in}}\right) \) to be a beta-distribution \(\mathcal {B}\left( \dot{V}_{\textrm{in}}|a=2.6,b=1.4\right) \) between \(0 \,{\mathrm {mm^2}/\textrm{s}} \) and \( 110 \,{\mathrm {mm^2}/\textrm{s}} \) with mode at \(88 \,{\mathrm {mm^2}/\textrm{s}}\), which is plotted in Fig. 10. The value of the volume inflow rate used for data generation is kept the same as in all other examples, \(\dot{V}_{\textrm{in}}= 100\,{\mathrm {mm^2}/\textrm{s}} \). The distribution accounts for the belief that less than the average fluid volume rate flows over the biofilm compared to the rest of the channel. For the two material parameters, uniform priors \(p\left( E\right) =\mathcal {U}\left( E|100\,\textrm{Pa},800\,\textrm{Pa}\right) \) and \(p\left( \nu \right) =\mathcal {U}\left( \nu |-0.8,0.5\right) \) were used.

Fig. 10
figure 10

Assumed beta-distribution \(p\left( \dot{V}_{\textrm{in}}\right) \) of the uncertain inflow volume rate \(\dot{V}_{\textrm{in}}\)

The resulting joint posterior of the two analyzed parameters under the influence of the uncertain inflow is plotted in Fig. 9d. The posterior under uncertainty, denoted by \(q(E,\nu |Y_{\textrm{obs},C})={\mathbb {E}}_{\dot{V}_{\textrm{in}}}\left[ p\left( E,\nu |\dot{V}_{\textrm{in}},Y_{\textrm{obs},C}\right) \right] \), was calculated according to (3b) by incorporating the average effect of the uncertainty of the inflow \(\dot{V}_{\textrm{in}}\sim p\left( \dot{V}_{\textrm{in}}\right) \). In Fig. 9d it can be seen that, as should be expected, the additional consideration of the inflow uncertainty makes the posterior less expressive; the increase in variance is found especially along the dimension of the Young’s modulus.
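To illustrate the structure of this marginalization, the following sketch averages the surrogate likelihood over prior draws of the uncertain inflow before it is combined with the material parameter prior. It is only a simple Monte Carlo illustration of the expectation in (3b); in the presented results the marginalization is instead carried out with the SMC particle representation (13), and `gp_log_lik` (the surrogate over the stacked input \([{\varvec{x}}, \dot{V}_{\textrm{in}}]\)) is an assumed helper.

```python
import numpy as np
from scipy import stats

prior_vdot = stats.beta(a=2.6, b=1.4, loc=0.0, scale=110.0)   # uncertain inflow rate [mm^2/s]

def marginal_log_lik(x, n_theta=64, seed=1):
    """Monte Carlo estimate of log E_theta[ lik(x, theta) ] over the uncertain inflow."""
    rng = np.random.default_rng(seed)
    theta = prior_vdot.rvs(size=n_theta, random_state=rng)
    ll = np.array([gp_log_lik(np.append(x, t)) for t in theta])
    return ll.max() + np.log(np.mean(np.exp(ll - ll.max())))   # log-mean-exp for stability
```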

The consideration of an uncertain boundary condition has also moved the point estimates. The MAP estimate moves to lower values, \({\varvec{x}}_{\textrm{MAP}}=[E\approx 322\,\textrm{Pa},\; \nu \approx 0.09]^{\textsf{T}} \), than in the uniform prior example without uncertainties. The posterior mean is \({\varvec{x}}_{\textrm{PM}}= [E\approx 416\,\textrm{Pa},\; \nu \approx -0.09]^{\textsf{T}}\). This is expected, as the mean of the assumed inflow distribution is lower than the value used for data generation, and a lower stiffness therefore leads to a better matching deformation.

Remark 8

(MAP estimation after marginalization) With the consideration of the uncertain parameter \(\varvec{\theta }(=\dot{V}_{\textrm{in}}) \), the MAP estimate is more difficult to obtain than without uncertainties, because the extended posterior (3b) must first be integrated to the posterior under uncertainty. Although this integration can easily be evaluated using the particle representation (13), the maximum of the posterior under uncertainty cannot be found without a further assumption. We chose a histogram approach that collects the SMC particles in square bins with 30 intervals per input variable \({\varvec{x}}\), according to the illustration in Fig. 9d. The trick of picking the particle that scored the highest posterior value in the last SMC iteration does not work for the posterior under uncertainty or for marginal distributions, as their determination first requires an integration step.
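Such a binning-based MAP estimate can be written compactly; a sketch with numpy, assuming weighted particles of the marginalized posterior:

```python
import numpy as np

def map_from_particles(particles, weights, bins=30):
    """Approximate the MAP of the (marginalized) posterior by binning the weighted
    SMC particles and returning the center of the bin with the largest accumulated weight."""
    hist, edges = np.histogramdd(particles, bins=bins, weights=weights)
    idx = np.unravel_index(np.argmax(hist), hist.shape)
    return np.array([0.5 * (e[i] + e[i + 1]) for e, i in zip(edges, idx)])
```

The resolution of this estimate is limited by the bin width, which is why such MAP estimates can only be rough.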

The global Gaussian approximation is more isotropic when the uncertainty is considered, which is reflected in more circular percentile lines. In terms of the covariance of the posterior, this means that there is no prevalent direction in this posterior. Compared to the posterior for the fixed boundary condition, the posterior including the uncertainty \(q(E,\nu |Y_{\textrm{obs},C})={\mathbb {E}}_{\dot{V}_{\textrm{in}}}\left[ p\left( E,\nu |\dot{V}_{\textrm{in}},Y_{\textrm{obs},C}\right) \right] \) relies on less restrictive assumptions and is therefore also less constrained in its results. This gives more flexibility, as a broader range of parameters can explain the observed experimental results. Nevertheless, better knowledge about the uncertainties \(p\left( \dot{V}_{\textrm{in}}\right) \) can greatly improve the calibration result.

In case non-controllable aleatory uncertainty is present, neglecting it will lead to an overconfident, wrong posterior as the analyst introduces a modeling error by neglecting these effects. Incorporating aleatory uncertainty in the probabilistic model will generally introduce more uncertainty to the posterior (the distribution widens) and might potentially even completely change the characteristics of the posterior distribution.

Calibration of heterogeneous biofilm model under uncertain inflow boundary condition

In our last example we want to calibrate material properties of a heterogeneous biofilm FSI model as in [12], but here under an uncertain inflow rate boundary condition. Heterogeneity comes into play in such problems due to different age and/or different nutrient availability of different parts of the domain. Hence, it was a natural choice to also shed some light on a more demanding case involving more parameters. The three subdomains are depicted in Fig. 11 and lead to six input parameters, consisting of the Young’s modulus and Poisson’s ratio of each subdomain.

Fig. 11
figure 11

Subdomains for heterogeneous biofilm model

Fig. 12
figure 12

Error for GP regression model with \(n_{\textrm{test}}= 100\) test points over number of training data points \(n_\textrm{train}\)

The parameters \({\varvec{x}}_{\textrm{gt}}=[E_1 = 500 \,\textrm{Pa},\; \nu _1 = 0.2,\;E_2 = 200 \,\textrm{Pa},\; \nu _2 = 0.1,\; E_3 = 1000 \,\textrm{Pa},\; \nu _3 = 0.3]^{\textsf{T}}\) for a hyperelastic Saint-Venant-Kirchhoff material model are used as ground truth inputs, along with the inflow rate \(\dot{V}_{\textrm{in}}=100\,{\mathrm {mm^2}/\textrm{s}}\). All other model details are the same as in the previous examples, and the synthetic experimental data \(Y_{\textrm{obs},C}\) for this heterogeneous case is generated accordingly. Please note that in this example we additionally assume noise-polluted measurement data according to

$$\begin{aligned} \begin{aligned} Y_{\textrm{obs},C}&= \mathfrak {M}\left( \varvec{x}_{\text {gt}},\varvec{\theta }_{\text {gt}},C\right) + \sigma _{\text {n,obs}}\cdot \varvec{\epsilon }\\ \text {with } \varvec{\epsilon }&\sim \mathcal {N}\left( \varvec{0},{I}\right) \end{aligned} \end{aligned}$$
(16)

For the following example we choose noise with a standard deviation of \(\sigma _{\text {n,obs}}=1\,\mathrm {\mu m}\). In Remark 9 we comment on the determination of the GP surrogate for this example.
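In code, generating such a noise-polluted synthetic observation according to (16) could look as follows, where `y_clean` stands for the interface result of the forward model at the ground truth inputs and is an assumed placeholder:

```python
import numpy as np

rng = np.random.default_rng(42)
sigma_n_obs = 1e-3                                  # 1 micrometer, expressed in mm
# y_clean: forward model interface output at the ground truth inputs (placeholder)
y_obs = y_clean + sigma_n_obs * rng.standard_normal(y_clean.shape)   # Eq. (16)
```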

Remark 9

(Convergence of the Gaussian process surrogate) As the GP surrogate model for the likelihood now depends on six input variables plus one uncertain variable, more training data is necessary to reach acceptable accuracy of the posterior mean function of the GP. A small convergence study was performed to find an appropriate training set size. We successively increased \(n_\textrm{train}\) according to the Sobol sequence and then calculated the \(\text {L}_2\)-error norm between the posterior mean of the GP and the likelihood evaluated with the forward model at \(n_{\textrm{test}}=100\) test points, which are fixed consecutive samples from later in the Sobol sequence and are not used in the training data. This is a representative choice, as the good space-filling property leads to well-distributed test points. The result of the convergence study is plotted in Fig. 12. The tested scenario is a kernel with multiple length scales in a multiplicative coupling [39, 53], as used in the following example. The parameters are expected, and also observed, to have different influences on the likelihood. Therefore, a multiplicative coupling of Matérn 3/2 kernels with individual length scales and variances for each parameter dimension is used. As opposed to the L-BFGS-B optimizer used in previous examples, a scaled conjugate gradient optimizer is used for the training for stability reasons.
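With GPy, such a multiplicative coupling of one-dimensional kernels can be sketched as follows; the training arrays are assumed to be built analogously to the two-dimensional example above, and the variable names are illustrative:

```python
import GPy
from functools import reduce

n_dim = 7   # six material parameters plus the uncertain inflow rate
# One Matern 3/2 kernel per input dimension, multiplied into a product kernel
kernels = [GPy.kern.Matern32(input_dim=1, active_dims=[i]) for i in range(n_dim)]
kernel = reduce(lambda a, b: a * b, kernels)

gp = GPy.models.GPRegression(x_train_std, log_lik_train, kernel)  # standardized inputs
gp.Gaussian_noise.variance = nugget                                # fixed nugget as before
gp.Gaussian_noise.variance.fix()
gp.optimize(optimizer='scg', messages=False)   # scaled conjugate gradients for stability
```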

Figure 12 shows that between \(n_\textrm{train}=1000\) and \(n_\textrm{train}=5000\) no significant improvement in the given error measure is achieved. Therefore, the evaluation with \(n_\textrm{train}=2000\) data points is chosen for the following example as a compromise between efficiency and accuracy. In general, asymptotic convergence can be expected, meaning that the error measure will tend to zero for an infinite number of training points. For \(n_\textrm{train}=500\), an increase in the error can be detected: here, a new characteristic of the likelihood function was introduced by the training points added between \(n_\textrm{train}=200\) and \(n_\textrm{train}=500\), which leads to the increased deviation at the test points.

For the SMC approach, a set of 10,000 particles with 30 rejuvenation steps was used, with a step size control parameter of \(\zeta = 0.998\). As a comparison to the \(n_\textrm{train}=2000\) training points used: a full evaluation in every rejuvenation step of the SMC algorithm would require three million (\(10,000\cdot 30\cdot 10= 3,000,000\)) forward model evaluations for exemplary 10 SMC steps. This is neither desirable nor feasible if applied directly to the expensive forward model. After the proof of concept in the previous examples with two input parameters and potentially one uncertain parameter, it must be emphasized that volume integrals for the respective marginals, and especially the normalization of the posterior (1), scale even worse in six plus one dimensions. Just for a moderately sized, grid-based discretization with 100 sample points in every dimension we would already need \(100^7\) evaluations of the GP mean. We therefore make use of the good scalability of the SMC algorithm in higher dimensions in the following examples.

As already indicated above, in this last example we want to show the full capability of the approach; we therefore use an example with uncertain inflow rate \(\dot{V}_{\textrm{in}}\) and noise added to the data generated from the ground truth values. The assumed distribution of \(\dot{V}_{\textrm{in}}\) is the same as in Fig. 10 above, a beta-distribution \(\mathcal {B}\left( \dot{V}_{\textrm{in}}|a=2.6,b=1.4\right) \) between \(0 \,{\mathrm {mm^2}/\textrm{s}} \) and \( 110 \,{\mathrm {mm^2}/\textrm{s}} \) with its mode at \(88 \,{\mathrm {mm^2}/\textrm{s}}\). The MAP approximation is found at \({\varvec{x}}_{\textrm{MAP}}=[E_1 \approx 345 \,\textrm{Pa},\; \nu _1 \approx 0.43,\;E_2 \approx 135 \,\textrm{Pa},\; \nu _2 \approx 0.30,\; E_3 \approx 1034 \,\textrm{Pa},\; \nu _3 \approx -0.47]^{\textsf{T}}\) and the global posterior mean computes to \({\varvec{x}}_{\textrm{PM}}= [E_1 \approx 443 \,\textrm{Pa},\; \nu _1 \approx -0.15,\;E_2 \approx 431 \,\textrm{Pa},\; \nu _2 \approx -0.13,\; E_3 \approx 637 \,\textrm{Pa},\; \nu _3 \approx -0.14]^{\textsf{T}}\). Analogously to the lower-dimensional example, the MAP estimate is obtained with a binning approach with 10 intervals in each input dimension and is therefore a rough estimate; the curse of dimensionality inhibits an excessively fine grid. The maximum of the posterior under uncertainty qualitatively represents the ground truth with \(E_3>E_1>E_2 \) and \(\nu _1 > \nu _2\). Only for \(\nu _3\) does it not correspond to the ground truth. This can be a consequence of the low relevance of this parameter for the overall posterior, as the lateral contraction in the stiffest subdomain, which is attached to the ground, has little effect on the interface deformation. In challenging examples, it can be a good idea to first perform a global sensitivity analysis [54, 55] and then focus the inverse analysis only on the most sensitive parameters. It also shows again that the problem setup is challenging, as the volume inflow rate is used as an uncertain parameter; this is, however, often the case in real-world applications. This has the effect that the MAP parameters are those of a less stiff biofilm material than the one used for the ground truth simulation.

The resulting one-dimensional marginals over the parameters, for which uniform priors were assumed, are plotted in Fig. 13 as histograms with KDE approximations in black. Here, the letter \(q({\varvec{x}}|Y_{\textrm{obs},C})\) is used because the distributions represent marginals of the posterior under uncertainty (3b).

Fig. 13
figure 13

Marginals of the posterior for a–f individual parameters in heterogeneous example with uncertain inflow condition with added Gaussian noise and uniform prior as red dashed line. Marginals as histograms (blue) with a KDE approximation (black)

It can be observed that most of these marginals show a rather uniform distribution. In this example, this happens because each of them represents the integration of the posterior with respect to all other parameters, in which the probability mass accumulates. Only for subdomain \(\Omega _2\) (Fig. 13c and d) do the posterior marginals accumulate density around the ground truth \(E_2 \approx 200\,\textrm{Pa}\) and \(\nu _2 > 0\). As these one-dimensional marginals do not unveil the full complexity of the posterior, a further step is made to show two-dimensional marginals for selected combinations of parameters in Fig. 14 as hexagonal histograms with the percentile lines of the KDE approximations. Therein the interaction effects between the respective combinations of parameters can be seen. Especially within the regions enclosed by the \(10\%\) percentile lines, the most probable parameter combinations for the respective marginals can be found.

Fig. 14
figure 14

Hexagonal histograms (high weight in dark blue) for marginals for selected pairs of parameters in heterogeneous example with uncertain inflow condition and added Gaussian noise. Percentile lines of KDE approximations in black

The marginal posterior distributions in Fig. 14 show complex densities. In contrast to a deterministic calibration approach for the same example [12], where even the less complex case with a fixed volume inflow rate \(\dot{V}_{\textrm{in}}\) could not be solved, this complex character can be well represented in the solution obtained here. Furthermore, for Fig. 14a and c it can be seen that \(E_2 \) dominates the marginal posteriors, which are high around \(E_2 \approx 200 \,\textrm{Pa}\) for a plausible range of the other Young’s moduli \(E_1,E_3\). Also, the shape of the distribution for \(\Omega _2 \) in Fig. 14e resembles the posterior shape in Fig. 9d, i.e., the homogeneous example with uniform prior and uncertain inflow. This means that the parameters of \(\Omega _2\) qualitatively have a similar influence on this marginal as the two parameters have on the posterior in the homogeneous example. This is also expected, as \(\Omega _2 \) takes up the largest portion of the biofilm domain (see Fig. 11) and therefore dominates the deformation.

As we have six parameters involved, a full plot of the posterior is impossible. Alternatively, a parallel axis plot as in Fig. 15 can be used to get an idea of the underlying posterior shape.

Fig. 15
figure 15

Parallel axis plot over all parameters, posterior values \(p\left( {\varvec{x}}|\varvec{\theta },Y_{\textrm{obs},C}\right) \) and inflow rate for all particles of the SMC particle approximation of the posterior with uniform prior assumptions for \({\varvec{x}}\)

The first six axes show the input parameters over the range of interest. The seventh axis shows the extended posterior values for the 10,000 particles of the SMC particle approximation. The last axis represents the uncertain inflow boundary condition \(\dot{V}_{\textrm{in}}\). Every line connecting the axes represents one particle of the SMC solution, and all lines are colored by their absolute extended posterior density value. The lines are drawn on top of each other, starting with low posterior density values in blue and ending with the highest ones in red. Most of the parameter combinations with high posterior density accumulate around \(E_2 \approx 200\,\textrm{Pa}\) and \(\nu _1, \nu _2 > 0\). The other three parameters \(E_1, E_3, \nu _3\) attain relatively high posterior values over the whole range of interest, which means they are individually less significant for a specific value of the posterior density. Given the subdomain topology shown in Fig. 11, these posterior results are also to be expected, as \(\Omega _2 \) takes up the largest portion of the biofilm and can therefore be expected to have the highest influence on the deformation in this case.

The possibility of such a phenomenological interpretation of the posterior is a very attractive feature of tackling the inverse problem via the proposed approach, as not only good point estimates can be obtained, but, more importantly, the influence of the individual parameters on the posterior can be observed and interpreted. The information revealed in such a probabilistic analysis, as displayed for example in Figs. 14 and 15, gives a lot of insight into the problem at hand. Among others, it also shows how useful the given experimental information is, or whether other measurements are needed for the identification of relevant parameters. With the rich information from such an analysis, an estimate of the plausibility and stability of the point estimates can also be made. As an example, a deterministic or trial-and-error approach might end up identifying negative Poisson’s ratios for biofilms (easily understandable when looking at Figs. 9 or 15) and hence identifying biofilms as auxetic materials, as has been done in the past. The probabilistic analysis would immediately show the lack of validity of such a conclusion.

Conclusion

We presented a robust and efficient Bayesian calibration approach for coupled computational mechanics models with given boundary or interface deformations. We considered the particularly challenging case where only interface shapes, based on images of the domain at different points in time or under different experimental conditions, are obtainable, but no displacements of material points. Lacking such displacements, we also introduced and compared several metrics to calculate the discrepancy between interface shapes. We further addressed the additional challenge of incorporating the influence of uncontrollable uncertainties in the experimental setup, such as uncertain boundary conditions.

In contrast to deterministic optimization approaches for calibration tasks, Bayesian calibration is mathematically much more robust, because it is formulated in a probabilistic manner that describes the problem globally in form of a so-called posterior distribution. An optimization approach, in contrast, only results in one particular point estimate for the parameters; such an estimate is prone to getting stuck in local optima, which might not lead to satisfactory results.

The Bayesian calibration approach also allows for a more meaningful interpretation of the calibration result. The posterior density can be explored using a sequential Monte Carlo approach. This also allows for a convenient computation of the involved high-dimensional integrals or point estimates such as the maximum a posteriori (MAP) estimate. The posterior distribution gives rise to several further expressive point estimates, uncertainty and robustness measures, which help to get a deeper understanding of the parameters’ effect on the solution.

We have observed that, already for very few parameters, approximations based on point estimates, e.g., the MAP estimate combined with the Laplace approximation, cannot live up to the complexity of the resulting posteriors. Therefore, a full approximation of the posterior appears to be necessary for the studied inverse problems. It allows further insight to be gained into the relevance of the parameters and their interaction in the model. Thereby, a better understanding of the computational forward model in relation to the observed results of an experiment can be obtained.

To control the computational cost of the approach we proposed to build a Gaussian process (GP) surrogate model on the log-likelihood. The computationally most expensive step in the presented approach is the generation of the training data for the construction of the GP surrogate. The required forward model evaluations can however be computed in parallel and their exact amount is open to the analyst’s choice. By constructing the log-likelihood surrogate, the approach is robust against single failing forward model evaluations, as it can be built on the remaining successful runs.

We presented and tested three different types of discrepancy measures between interfaces for the case when no comparison of displacements of material points is possible. First, a Euclidean distance measure at selected measurement points at characteristic locations on the interface was used. It is expected to be the easiest measure to apply to real data, as no full representation of the deformed geometry is required. Second, we used closest point projection distances for all interface nodes of the finite element discretization as a comparative measure. Finally, the RKHS norm measure led to the most expressive posterior and is in general appealing because of its solid mathematical foundation and flexibility. Nevertheless, all measures yielded suitable likelihood distributions. A proper choice should be made based on the characteristics and quality of the experimental data. The choice of the distance measure also depends on the type of calibration task and on the question of which aspect of the deformation the analyst wants to emphasize.

For the demonstration of our calibration approach, we chose the material parameter identification of a spatially two-dimensional fluid–structure interaction (FSI) problem including a homogeneous hyperelastic solid biofilm model. We used artificial reference data such that the ground truth is known, in order to focus the attention on the characteristics of the proposed approach. We showed that the point estimates in the examples with two calibrated parameters were close to the ground truth \({\varvec{x}}_{\textrm{gt}}\), and the approximation of the full posterior allowed us to observe the strong coupling between the input parameters in their influence on the posterior density. The posterior solution allowed us to observe the interaction of the two parameters in the model, which in the present case was an almost linear relationship between \(E\) and \(\nu \) along which high posterior densities occurred in this setting. We furthermore studied the influence of uncertain experimental conditions on the posterior density. Such uncertainty led to a slightly flatter and therefore less expressive posterior: the variance of the posterior increased and the posterior density filled the design space more equally. Besides the expected increase in variance, the posterior mean and the MAP estimate also shifted. Compared to the previous case, where the additional uncertainty was neglected, this shows that it is important to consider existing uncertainties. Otherwise the posterior might be overconfident and might have entirely different characteristics.

As a further example, we investigated a more complex calibration problem of a heterogeneous biofilm model with six unknown material parameters under uncertainty, which also included artificial measurement noise. In this challenging example, the MAP estimate deviates more from the ground truth because the problem has more degrees of freedom as well as additional uncertainty and noise. This example, which could not be solved with a deterministic approach in [12] even in an easier setting without uncertainty, could be solved and interpreted with the approach presented herein, hinting at the higher robustness of the presented approach.

While the examples have been chosen from a particular field of application, the approach is general and can be applied to all sorts of single-field or coupled multi-field calibration problems in which boundary or interface deformations are available, but no point-to-point correspondence between simulation results and experimental observations can be established. In particular, the presented approach can be used without limitations for spatially three-dimensional models and for more complex and expensive forward models.

In high parameter dimensions, other methods like variational Bayes approaches or multi-fidelity approaches such as [56] are promising alternatives.

Availability of data and materials

The research code is hosted on a private GitLab repository at Leibniz Rechenzentrum (LRZ) in Garching. The generated numerical results and digital data are held on machines that are backed up on servers managed by the LRZ in Garching. The datasets used and/or analysed during the current study are available from the corresponding author on reasonable request.

References

  1. Bilionis I, Zabaras N. Solution of inverse problems with limited forward solver evaluations: a Bayesian perspective. Inverse Probl. 2013;30(1): 015004. https://doi.org/10.1088/0266-5611/30/1/015004.


  2. Kennedy MC, O’Hagan A. Bayesian calibration of computer models. J Royal Stat Soc Ser B. 2001;63(3):425–64. https://doi.org/10.1111/1467-9868.00294.


  3. Tarantola A. Inverse problem theory and methods for model parameter estimation. Philadelphia: SIAM Society for Industrial and Applied Mathematics; 2005.


  4. Moireau P, Chapelle D, Tallec PL. Filtering for distributed mechanical systems using position measurements: perspectives in medical imaging. Inverse Probl. 2009;25(3): 035010. https://doi.org/10.1088/0266-5611/25/3/035010.


  5. Sermesant M, Moireau P, Camara O, Sainte-Marie J, Andriantsimiavona R, Cimrman R, Hill DLG, Chapelle D, Razavi R. Cardiac function estimation from MRI using a heart model and data assimilation: advances and difficulties. Med Image Anal. 2006;10(4):642–56. https://doi.org/10.1016/j.media.2006.04.002.


  6. Kehl S, Gee MW. Calibration of parameters for cardiovascular models with application to arterial growth. Int J Numer Methods Biomed Eng. 2016;33(5):2822. https://doi.org/10.1002/cnm.2822.


  7. Flemming H-C, Wingender J. The biofilm matrix. Nat Rev Microbiol. 2010;8(9):623–33. https://doi.org/10.1038/nrmicro2415.


  8. Böl M, Ehret AE, Albero AB, Hellriegel J, Krull R. Recent advances in mechanical characterisation of biofilm and their significance for material modelling. Crit Rev Biotechnol. 2013;33(2):145–71. https://doi.org/10.3109/07388551.2012.679250.


  9. Gloag ES, Fabbri S, Wozniak DJ, Stoodley P. Biofilm mechanics: implications in infection and survival. Biofilm. 2020;2: 100017. https://doi.org/10.1016/j.bioflm.2019.100017.


  10. Boudarel H, Mathias J-D, Blaysat B, Grédiac M. Towards standardized mechanical characterization of microbial biofilms: analysis and critical review. npj Biofilms Microbiomes. 2018;4:17. https://doi.org/10.1038/s41522-018-0062-5.


  11. Picioreanu C, Blauert F, Horn H, Wagner M. Determination of mechanical properties of biofilms by modelling the deformation measured using optical coherence tomography. Water Res. 2018;145:588–98. https://doi.org/10.1016/j.watres.2018.08.070.


  12. Willmann H, Wall WA. Inverse analysis of material parameters in coupled multi-physics biofilm models. Adv Model Simul Eng Sci. 2022. https://doi.org/10.1186/s40323-022-00220-0.


  13. Wagner M, Taherzadeh D, Haisch C, Horn H. Investigation of the mesoscale structure and volumetric features of biofilms using optical coherence tomography. Biotechnol Bioeng. 2010;107(5):844–53. https://doi.org/10.1002/bit.22864.


  14. Blauert F, Horn H, Wagner M. Time-resolved biofilm deformation measurements using optical coherence tomography. Biotechnol Bioeng. 2015;112(9):1893–905. https://doi.org/10.1002/bit.25590.


  15. Gierl L, Stoy K, Faíña A, Horn H, Wagner M. An open-source robotic platform that enables automated monitoring of replicate biofilm cultivations using optical coherence tomography. npj Biofilms Microbiomes. 2020;6:18. https://doi.org/10.1038/s41522-020-0129-y.


  16. Jackson BD, Connolly JM, Gerlach R, Klapper I, Parker AE. Bayesian estimation and uncertainty quantification in models of urea hydrolysis by E. coli biofilms. Inverse Probl Sci Eng. 2021;29(11):1629–52. https://doi.org/10.1080/17415977.2021.1887172.


  17. Robert CP. The Bayesian choice. New York: Springer; 2007. https://doi.org/10.1007/0-387-71599-1.


  18. Sternfels R, Koutsourelakis P-S. Stochastic design and control in random heterogeneous materials. Int J Multiscale Comput Eng. 2011;9(4):425–43. https://doi.org/10.1615/IntJMultCompEng.v9.i4.60.


  19. Koutsourelakis PS. Design of complex systems in the presence of large uncertainties: a statistical approach. Comput Methods Appl Mech Eng. 2008;197(49–50):4092–103. https://doi.org/10.1016/j.cma.2008.04.012.


  20. Kaipio J, Somersalo E. Statistical and computational inverse problems. New York: Springer; 2004. https://doi.org/10.1007/b138659.


  21. Bishop CM. Pattern recognition and machine learning. New York: Springer; 2011.


  22. Aronszajn N. Theory of reproducing kernels. Trans Am Math Soc. 1950;68(3):337–337. https://doi.org/10.1090/S0002-9947-1950-0051437-7.


  23. Vaillant M, Glaunès J. Surface matching via currents. In: Christensen GE, Sonka M, editors. Information processing in medical imaging. Berlin, Heidelberg: Springer; 2005. p. 381–92.

  24. Imperiale A, Routier A, Durrleman S, Moireau P. Improving efficiency of data assimilation procedure for a biomechanical heart model by representing surfaces as currents. In: Ourselin S, Rueckert D, Smith N, editors. Functional imaging and modeling of the heart. Berlin: Springer; 2013. p. 342–51. https://doi.org/10.1007/978-3-642-38899-6_41.


  25. Geyer CJ. Practical Markov chain Monte Carlo. Stat Sci. 1992;7(4):473–83.

  26. Chib S, Greenberg E. Understanding the Metropolis–Hastings algorithm. Am Stat. 1995;49(4):327–35. https://doi.org/10.1080/00031305.1995.10476177.


  27. Glynn PW, Iglehart DL. Importance sampling for stochastic simulations. Manag Sci. 1989;35(11):1367–92. https://doi.org/10.1287/mnsc.35.11.1367.


  28. Tokdar ST, Kass RE. Importance sampling: a review. WIREs Comput Stat. 2010;2(1):54–60. https://doi.org/10.1002/wics.56.


  29. Doucet A, Freitas N, Gordon N, editors. Sequential Monte Carlo methods in practice. New York: Springer; 2001. https://doi.org/10.1007/978-1-4757-3437-9.


  30. Chopin N, Papaspiliopoulos O. An Introduction to Sequential Monte Carlo. Cham: Springer; 2020. https://doi.org/10.1007/978-3-030-47845-2.


  31. Blei DM, Kucukelbir A, McAuliffe JD. Variational inference: a review for statisticians. J Am Stat Assoc. 2017;112(518):859–77. https://doi.org/10.1080/01621459.2017.1285773.

  32. Hoffman MD, Blei DM, Wang C, Paisley J. Stochastic variational inference. J Mach Learn Res. 2013;14(4):1303–47.

  33. Del Moral P, Doucet A, Jasra A. Sequential Monte Carlo for Bayesian computation. In: Bernardo JM, Bayarri MJ, Berger JO, Dawid AP, Heckerman D, Smith AFM, West M, editors. Bayesian Statistics 8. Oxford: Oxford University Press; 2007. p. 1–34.

  34. Koutsourelakis PS. A multi-resolution, non-parametric, Bayesian framework for identification of spatially-varying model parameters. J Comput Phys. 2009;228(17):6184–211. https://doi.org/10.1016/j.jcp.2009.05.016.

  35. Chopin N. A sequential particle filter method for static models. Biometrika. 2002;89(3):539–52. https://doi.org/10.1093/biomet/89.3.539.

  36. Del Moral P, Doucet A, Jasra A. Sequential Monte Carlo samplers. J Royal Stat Soc Ser B. 2006;68(3):411–36. https://doi.org/10.1111/j.1467-9868.2006.00553.x.

  37. Minson SE, Simons M, Beck JL. Bayesian inversion for finite fault earthquake source models I–theory and algorithm. Geophys J Int. 2013;194(3):1701–26. https://doi.org/10.1093/gji/ggt180.

  38. Robert CP, Casella G. Monte Carlo statistical methods. New York: Springer; 2004. https://doi.org/10.1007/978-1-4757-4145-2.

  39. Rasmussen CE, Williams CKI. Gaussian processes for machine learning. Cambridge: MIT Press; 2006.

  40. Sobol I. The distribution of points in a cube and the approximate evaluation of integrals (in Russian). Zh Vychisl Mat Mat Fiz. 1967;7:784–802.

  41. Owen AB. Scrambling Sobol' and Niederreiter–Xing points. J Complex. 1998;14(4):466–89. https://doi.org/10.1006/jcom.1998.0487.

  42. Kucherenko S, Albrecht D, Saltelli A. Exploring multi-dimensional spaces: a comparison of Latin hypercube and quasi Monte Carlo sampling techniques. 2015. arXiv:1505.02350.

  43. Biehler J, Nitzler J, Wall WA, Gravemeier V. QUEENS – a software platform for uncertainty quantification, physics-informed machine learning, Bayesian optimization, inverse problems and simulation analytics: user guide. AdCo EngineeringGW GmbH; 2019.

  44. GPy: a Gaussian process framework in Python (since 2012). http://github.com/SheffieldML/GPy. Accessed 20 Aug 2021.

  45. Schroeder W, Martin K, Lorensen B. The Visualization Toolkit: an Object-oriented Approach to 3D Graphics. Clifton Park, N.Y: Kitware; 2006.

  46. Waskom ML. seaborn: statistical data visualization. J Open Source Softw. 2021;6(60):3021. https://doi.org/10.21105/joss.03021.

  47. Hunter JD. Matplotlib: a 2D graphics environment. Comput Sci Eng. 2007;9(3):90–5. https://doi.org/10.1109/MCSE.2007.55.

  48. Plotly Technologies Inc.: Collaborative Data Science. https://plot.ly

  49. Paszke A, Gross S, Massa F, Lerer A, Bradbury J, Chanan G, Killeen T, Lin Z, Gimelshein N, Antiga L, Desmaison A, Kopf A, Yang E, DeVito Z, Raison M, Tejani A, Chilamkurthy S, Steiner B, Fang L, Bai J, Chintala S. PyTorch: an imperative style, high-performance deep learning library. In: Wallach H, Larochelle H, Beygelzimer A, d'Alché-Buc F, Fox E, Garnett R, editors. Advances in Neural Information Processing Systems 32. Vancouver: Curran Associates, Inc.; 2019. p. 8024–35.

  50. BACI: a comprehensive multi-physics simulation framework. https://baci.pages.gitlab.lrz.de/website/. Accessed 16 June 2021.

  51. Coroneo M, Yoshihara L, Wall WA. Biofilm growth: a multi-scale and coupled fluid-structure interaction and mass transport approach. Biotechnol Bioeng. 2014;111(7):1385–95. https://doi.org/10.1002/bit.25191.

  52. Schölkopf B, Smola A, Müller K-R. Kernel principal component analysis. In: Gerstner W, Germond A, Hasler M, Nicoud J-D, editors. Artificial Neural Networks – ICANN'97, vol. 1327. Berlin, Heidelberg: Springer; 1997. p. 583–8. https://doi.org/10.1007/BFb0020217.

  53. Duvenaud D. Automatic model construction with Gaussian processes. PhD thesis, University of Cambridge; 2014.

  54. Brandstaeter S, Fuchs SL, Biehler J, Aydin RC, Wall WA, Cyron CJ. Global sensitivity analysis of a homogenized constrained mixture model of arterial growth and remodeling. J Elast. 2021;145(1–2):191–221. https://doi.org/10.1007/s10659-021-09833-9.

  55. Wirthl B, Brandstaeter S, Nitzler J, Schrefler BA, Wall WA. Global sensitivity analysis based on Gaussian-process metamodelling for complex biomechanical problems. 2022. arXiv:2202.01503.

  56. Nitzler J, Biehler J, Fehn N, Koutsourelakis P-S, Wall WA. A generalized probabilistic learning approach for multi-fidelity uncertainty quantification in complex physical simulations. Comput Methods Appl Mech Eng. 2022;400: 115600. https://doi.org/10.1016/j.cma.2022.115600.

  57. Wagner M, Horn H. Optical coherence tomography in biofilm research: a comprehensive review. Biotechnol Bioeng. 2017;114(7):1386–402. https://doi.org/10.1002/bit.26283.

  58. Gee MW, Küttler U, Wall WA. Truly monolithic algebraic multigrid for fluid-structure interaction. Int J Numer Methods Eng. 2010;85(8):987–1016. https://doi.org/10.1002/nme.3001.

  59. Yoshihara L, Coroneo M, Comerford A, Bauer G, Klöppel T, Wall WA. A combined fluid-structure interaction and multi-field scalar transport model for simulating mass transport in biomechanics. Int J Numer Methods Eng. 2014;100(4):277–99. https://doi.org/10.1002/nme.4735.

  60. Küttler U, Gee M, Förster C, Comerford A, Wall WA. Coupling strategies for biomedical fluid-structure interaction problems. Int J Numer Methods Biomed Eng. 2010;26(3–4):305–21. https://doi.org/10.1002/cnm.1281.

  61. Taherzadeh D, Picioreanu C, Küttler U, Simone A, Wall WA, Horn H. Computational study of the drag and oscillatory movement of biofilm streamers in fast flows. Biotechnol Bioeng. 2010;105(3):600–10. https://doi.org/10.1002/bit.22551.

  62. Berlinet A, Thomas-Agnan C. Reproducing Kernel Hilbert spaces in probability and statistics. New York: Springer; 2004. https://doi.org/10.1007/978-1-4419-9096-9.

  63. Stein ML. Interpolation of spatial data. New York: Springer; 1999.

  64. Matérn B. Spatial variation. New York: Springer; 1986. https://doi.org/10.1007/978-1-4615-7892-5.

Acknowledgements

Funding of this work by the German Research Foundation (DFG) under project number WA 1521/22 is gratefully acknowledged. Furthermore, the research was partly funded by the DFG under the priority program SPP 1886 "Polymorphic uncertainty modeling for the numerical design of structures". SB gratefully acknowledges funding from the German Research Foundation (Deutsche Forschungsgemeinschaft, DFG) under project number 386349077. A base version of the software QUEENS was provided by AdCo EngineeringGW GmbH, which is gratefully acknowledged. The first implementation of Gaussian processes and further infrastructure in QUEENS by B. Wirthl is gratefully acknowledged.

Funding

Open Access funding enabled and organized by Projekt DEAL. This work was funded by the German Research Foundation (DFG) through different projects of the authors: HW was funded under project number WA 1521/22, JN through the priority program SPP 1886 "Polymorphic uncertainty modeling for the numerical design of structures", and SB under project number 386349077.

Author information

Contributions

HW implemented the workflow in QUEENS, ran the simulations and generated the figures. HW and JN conceived and discussed the approach. SB implemented the SMC integration and contributed to the discussion of its application. WAW conceived the general outline of the project. All authors read and approved the manuscript.

Corresponding author

Correspondence to Harald Willmann.

Ethics declarations

Competing interests

The authors declare that they have no competing interests.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Appendix

In this appendix, we present additional details on the biofilm flow cell experiments, the continuum mechanics description of the FSI problem, further details on RKHS properties, and a brief description of Gaussian process regression.

Flow cell experiments with biofilms and optical coherence tomography

This section gives a short introduction to flow cell experiments with biofilms. In a flow cell experiment, the biofilm is grown on the bottom of a channel through which a liquid flows and supplies nutrients to the microorganisms. Typical channel dimensions are \(50\,\textrm{mm}\times 5\,\textrm{mm}\times 0.45\,\textrm{mm} \) in recent experiments [15] or \(124\,\textrm{mm}\times 2\,\textrm{mm}\times 1\,\textrm{mm} \) in earlier experiments [14]. An exemplary flow cell channel is depicted in Fig. 16 in top–down view; to simplify the illustration, only a single larger biofilm patch is drawn. After a pronounced biofilm patch is identified in the channel, a deformation experiment is conducted with the specimen [14]. Such experiments are common in biofilm research and are therefore used here for parameter identification. Parameter identification is particularly challenging for biofilms because of the soft consistency of the material and the necessity to keep it under natural conditions. In a deformation experiment, a mechanical load is applied to the biofilm via the flowing liquid and the resulting deformation of the biofilm domain is measured.

To obtain an accurate measurement of the biofilm boundary in the undeformed reference configuration, the biofilm is first scanned using optical coherence tomography (OCT) without significant fluid flow through the channel. Subsequently, a fluid flow with a controllable volume flow rate is introduced into the channel at the inlet via a pump. We do not simply translate this flow rate into a prescribed load on the biofilm surface, e.g., a constant shear stress, but account for the full interaction between liquid and biofilm. The resulting fluid forces acting on the immersed biofilm boundary deform the biofilm, which in return changes the surrounding fluid flow. This phenomenon is known as fluid–structure interaction (FSI) or, in this specific case, as fluid–biofilm interaction (FBI).

Fig. 16 Schematic setup of a flow cell experiment with biofilms for measurement with optical coherence tomography (OCT)

The deformed biofilm is then scanned again via OCT. OCT can only capture a limited scan window, as sketched in Fig. 16. This is important for our approach because it contributes to the uncertainty of the experimental conditions: upstream and downstream of the analyzed patch, as well as in the attached tubes and the inflow and outflow areas, there may be “invisible”, unregistered biofilm patches. The resulting snapshots of the undeformed and deformed biofilm constitute the data of the flow cell experiment. If required, the experiment and snapshots are repeated for different flow settings or at multiple points in time. In the literature (e.g., [14, 15, 57]), one planar OCT scan is labeled a B-scan, as it is a combination of one-dimensional A-scans along the flow direction. By combining several B-scans into a C-scan, a three-dimensional volumetric representation of the biofilm is obtained. This type of image data can then be used in the presented material parameter calibration approach.

The reduction of the whole channel to a planar view of a portion of its center also renders the fluid inflow boundary conditions of the planar model prone to uncertainty. First, as already mentioned, it is unknown whether unregistered biofilm patches exist upstream or downstream of the considered domain; in practice, the inverse analysis and biofilm modeling focus on the biggest biofilm patch in the channel. Second, if the biofilm patch does not occupy the full channel width uniformly, part of the fluid flow will bypass the patch, so the average flow rate cannot be assumed to pass over it. These considerations lead to the assumption that the flow rate in the two-dimensional model is uncertain, and this type of uncertainty must be quantifiable in our approach.

Fluid–solid interaction approach for modeling biofilm mechanics

In this work we model the previously described fluid–biofilm interaction in the flow cell experiments as a coupled mechanical two-field problem. The experiments can result in significant deformations, displacements and rotations of the initial biofilm configuration, such that a nonlinear kinematic description of the biofilm displacement field is required. The strong form of the associated differential equations for fluid–solid interaction is briefly summarized for the fluid domain, the solid domain and the coupling of the two. For the sake of compactness, the presentation is limited to the field equations; the respective boundary conditions are implicitly assumed to be well defined. The interested reader is referred to the literature (e.g. [58, 59]) for a methodological discussion of fluid–solid interaction models and their numerical discretization with a monolithic arbitrary Lagrangian–Eulerian (ALE) approach.

Continuum description of the fluid field

The fluid field in the channel is well described by the incompressible Navier–Stokes equations for Newtonian fluids. We apply an arbitrary Lagrangian–Eulerian (ALE) description, which uses a moving fluid mesh, compatible with the moving solid domain, and accounts for the resulting mesh displacements and velocities in the fluid domain. The velocity of the fluid relative to the fluid mesh is then expressed by the ALE convective velocity \(\varvec{c}^{\textrm{F}}\). The fluid equations in ALE formulation read as

$$\begin{aligned} \rho ^{\textrm{F}}\dfrac{ \partial \varvec{v}^{\textrm{F}}}{\partial t}+\rho ^{{\textrm{F}}}(\varvec{c}^{\textrm{F}}\cdot \varvec{\nabla }) \varvec{v}^{\textrm{F}}-2 \mu ^{{\textrm{F}}} \varvec{\nabla }\cdot \varvec{\epsilon } (\varvec{v}^{\textrm{F}})+\varvec{\nabla }p^{\textrm{F}} = \rho ^{{\textrm{F}}}\hat{\varvec{b}}^{\textrm{F}} \quad \text {in } \Omega ^{\textrm{F}}\times (0,T), \end{aligned}$$
(17a)
$$\begin{aligned} \varvec{\nabla }\cdot \varvec{v}^{\textrm{F}} = 0 \quad \text {in } \Omega ^{\textrm{F}}\times (0,T). \end{aligned}$$
(17b)

Here, the variables of interest are the fluid velocity \( \varvec{v}^{\textrm{F}}\) and the fluid pressure \( p^{\textrm{F}}\); further quantities are the fluid density \( \rho ^{{\textrm{F}}}\), the fluid dynamic viscosity \( \mu ^{{\textrm{F}}}\) and the body force \( \hat{\varvec{b}}^{\textrm{F}}\) in the fluid domain \( \Omega ^{\textrm{F}}\times (0,T) \). The strain rate tensor \(\varvec{\epsilon }(\varvec{v}^{\textrm{F}})\) in (17) expands to the expression

$$\begin{aligned} \varvec{\epsilon }(\varvec{v}^{\textrm{F}})= \frac{1}{2}\left( \varvec{\nabla }\varvec{v}^{\textrm{F}}+ \left( \varvec{\nabla }\varvec{v}^{\textrm{F}}\right) ^{\textsf{T}}\right) . \end{aligned}$$
(18)
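For illustration only, the symmetrization in (18) can be written as a one-line NumPy function acting on a pointwise velocity gradient; the array name grad_v is a placeholder and not part of the actual forward model implementation.

import numpy as np

def strain_rate_tensor(grad_v):
    # Symmetric part of the velocity gradient, eps(v) = 0.5 * (grad_v + grad_v^T), cf. (18)
    return 0.5 * (grad_v + np.transpose(grad_v))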

Continuum description of the solid field

In this work, the biofilm is modeled as a nonlinear solid. In the solid domain, the continuum field is described by the nonlinear balance of momentum in reference configuration

$$\begin{aligned} {\rho }_0^{\textrm{S}}\dfrac{\textrm{d}^2 {\varvec{d}^{{\textrm{S}}}}}{\textrm{d} t^2} = {\varvec{\nabla }}_0 \cdot (\varvec{F}\cdot \varvec{{S}}) +{\rho }_0^{\textrm{S}}{\hat{\varvec{b}}}_0^{\textrm{S}} \quad \text {in } {\Omega }_0^{\textrm{S}}\times (0,T). \end{aligned}$$
(19)

Here, the solid displacements \( \varvec{d}^{{\textrm{S}}}\) are used as the primary variable. The other quantities are the deformation gradient \(\varvec{F}\), the second Piola–Kirchhoff stress tensor \(\varvec{{S}}\) and the body force in reference configuration \({\hat{\varvec{b}}}_0^{\textrm{S}}\); \({\rho }_0^{\textrm{S}}\) denotes the solid density in reference configuration. The balance equation is formulated on the initial structure domain in reference configuration \({\Omega }_0^{\textrm{S}}\times (0,T) \).

Fluid–solid interface and coupling

The coupling of the fluid domain and the solid domain is achieved by interface conditions on the fluid–solid interface \(\Gamma ^{{\textrm{F}}, {\textrm{S}}} \times (0,T) \). Here, the balance of the interface tractions \(\varvec{h}^{\textrm{S}}_\Gamma \) and \(\varvec{h}^{\textrm{F}}_\Gamma \) between the fluid and the solid phase and a no-slip condition on the respective primary variables need to hold:

$$\begin{aligned} \varvec{h}^{\textrm{S}}_\Gamma = -\varvec{h}^{\textrm{F}}_\Gamma \quad \text {on }\Gamma ^{{\textrm{F}}, {\textrm{S}}} \times (0,T), \end{aligned}$$
(20a)
$$\begin{aligned} \dfrac{ \partial \varvec{d}^{{\textrm{S}}}}{\partial t} = \varvec{v}^{\textrm{F}}_\Gamma \quad \text {on }\Gamma ^{{\textrm{F}}, {\textrm{S}}} \times (0,T). \end{aligned}$$
(20b)

Consequently, in Fig. 2 the initial biofilm interface is labeled \(\Gamma ^{{\textrm{F}}, {\textrm{S}}}_0 \), i.e., the interface in reference configuration.

Numerical discretization of the fluid–solid interaction problem

For all of the discussed problems in this article, the numerical discretization of the governing equations is conducted with the finite element method (FEM). We use a monolithic approach to solve the coupled FSI equations in ALE formulation [58, 59], as the approach was shown to be well suited for biomedical problems of similar type [60] and has also been applied in the biofilm setting before [61].

Reproducing Kernel Hilbert space details

The reproducing kernel property implies that the inner product of the kernel and a function reproduces the original function, according to

$$\begin{aligned} \varvec{f}(\varvec{y})=\langle \varvec{f}(\varvec{x}),\varvec{k}(\varvec{x},\varvec{y})\rangle _{\mathcal {H}}. \end{aligned}$$
(21)

The kernel \(\varvec{k}(\varvec{x},\varvec{y})\) must be a positive definite function and is unique for the associated RKHS. For our investigations we furthermore require the kernel to be symmetric, such that \(\varvec{k}(\varvec{x},\varvec{y})=\varvec{k}(\varvec{y},\varvec{x})\), with \(\varvec{k}(\cdot , \varvec{y})\in \mathcal {H}\). Symmetry allows us to interpret the kernel function as a correlation function in a statistical sense, which is commonly done in probabilistic machine learning [39, 53]. A kernel can be generated by an associated feature map \(\varvec{\phi }:\mathcal {X}\rightarrow \mathcal {H}\) in the sense of \(\varvec{k}(\varvec{x},\varvec{y}):=\langle \varvec{\phi }(\varvec{x}),\varvec{\phi }(\varvec{y})\rangle _{\mathcal {H}}\). The feature map \(\varvec{\phi }\) can be an infinite-dimensional vector or an infinite series. Often, only a resulting valid reproducing kernel \(\varvec{k}\) is known and the associated feature map \(\varvec{\phi }\) remains unknown (kernel trick). Loosely speaking, a desirable property of an RKHS is that a small value of the inner product of the difference function \(\varvec{d}\) with itself also implies point-wise closeness of the associated functions \(\varvec{f}_1\) and \(\varvec{f}_2\) [22, 62]. The choice of kernel function encodes the smoothness and complexity assumptions about the underlying functions in the inner product. The latter are usually given in a discrete point representation, and the continuous function is represented via the kernel function according to the well-known representer theorem, which directly demonstrates the interpretation of the kernel as basis functions for the underlying function or curve

$$\begin{aligned} \varvec{f}=\sum _{i=1}^{n}\alpha _i \varvec{k}(\cdot ,\varvec{x}_i). \end{aligned}$$
(22)

The function is thus represented as a linear combination of kernel evaluations with coefficients \(\alpha _i \). An extensive overview of common kernels and kernel algebra can be found in [53]. Here, we only briefly present the radial basis function (RBF) kernel, which results in smooth, infinitely differentiable functions and is also used in our investigations:

$$\begin{aligned} k\left( \varvec{x},\varvec{y}\right) = \textrm{exp}\left( - \frac{\left\| \varvec{x}-\varvec{y}\right\| _{2}^2}{2\sigma _\textrm{W}^2}\right) . \end{aligned}$$
(23)
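As a minimal illustration, the kernel matrix implied by (23) for two finite point sets can be evaluated as in the following NumPy sketch; the function name and the batched evaluation are our own choices and not part of the original implementation.

import numpy as np

def rbf_kernel(X, Y, sigma_w):
    # k_ij = exp(-||x_i - y_j||^2 / (2 * sigma_w^2)), cf. (23), for X of shape (n, d) and Y of shape (m, d)
    sq_dist = np.sum((X[:, None, :] - Y[None, :, :]) ** 2, axis=-1)
    return np.exp(-sq_dist / (2.0 * sigma_w ** 2))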

The RBF kernel with variance \(\sigma _\textrm{W}^2 \) describes a mapping \(\mathbb {R}^{n}\times \mathbb {R}^{n} \rightarrow \mathbb {R}\), so that it is treated as a scalar \(k(\varvec{x},\varvec{y}) \) in the following. After this presentation of the general concepts of inner products in RKHS, we now discuss possible definitions of distance measures. For the sake of simplicity, we investigate two-dimensional parameterized curves \(\varvec{f}(\varvec{s})\), but the concepts generalize easily to three spatial dimensions and to parameterized representations of surfaces. An arbitrary curve can be fully described by the parameterized vector representation in (24a). We can calculate the unit normal vectors of the curve by differentiation w.r.t. the parameter \(\varvec{s}\), as demonstrated in (24b).

$$\begin{aligned} \varvec{f}(\varvec{s}) = \begin{bmatrix} g(\varvec{s})&h(\varvec{s}) \end{bmatrix}^{\textsf{T}} \end{aligned}$$
(24a)
$$\begin{aligned} \varvec{n}_{\varvec{f}}(\varvec{s}) = \frac{1}{\sqrt{\left( \frac{d h(\varvec{s})}{d\varvec{s}}\right) ^2+\left( \frac{d g(\varvec{s})}{d\varvec{s}}\right) ^2}} \begin{bmatrix} -\frac{d h(\varvec{s})}{d\varvec{s}}&\frac{d g(\varvec{s})}{d\varvec{s}} \end{bmatrix}^{\textsf{T}} \end{aligned}$$
(24b)
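In a discrete setting, the derivatives in (24b) can be approximated by finite differences along the ordered points of the curve. The following sketch is only one possible realization and assumes the curve is given as an ordered array of points of shape (n, 2).

import numpy as np

def curve_unit_normals(points):
    # Approximate unit normals (24b) of an ordered 2D point set via finite differences
    tangents = np.gradient(points, axis=0)                      # [dg/ds, dh/ds] per point
    normals = np.stack([-tangents[:, 1], tangents[:, 0]], axis=1)
    return normals / np.linalg.norm(normals, axis=1, keepdims=True)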

Given the parameterization in (24a), a natural choice for a distance measure is given by the inner product

$$\begin{aligned} \mathfrak {D}_{\mathcal {V},\varvec{f}}&=\langle \varvec{d_f},\varvec{d_f}\rangle , \end{aligned}$$
(25a)
$$\begin{aligned} \text {with } k\left( \varvec{f}_{1},\varvec{f}_{2}\right)&=\textrm{exp}\left( - \frac{\left\| \varvec{d_f}\right\| _{2}^2}{2\sigma _\textrm{W}^2}\right) . \end{aligned}$$
(25b)

In contrast to the closest point projection or \(\text {L}_2\) norms, inner products in RKHS correlate all discretized points of curve \(\varvec{f}_1\) with all discretized points of curve \(\varvec{f}_2\), such that it is not necessary to identify specific point pairs for which the measure is calculated. While (25a) is a valid distance measure, it might not sufficiently account for differences in the complexity of \(\varvec{f}_1\) and \(\varvec{f}_2\) if the overall distance in the Hilbert space is small. This is why we do not follow this approach in this paper.

To put more emphasis on the difference in functional complexity, one can incorporate derivatives of the curves in the measure. Hence, another special choice of distance measure, which will also be used in this paper, can be constructed from the difference of the normal-vector functions \(\varvec{n}_{\varvec{f}_1}\) and \(\varvec{n}_{\varvec{f}_2}\) from (24b) instead of the difference of the functions themselves.
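One possible discrete realization of such a normal-vector-based measure, in the spirit of the currents representation [23, 24], correlates all point pairs of both curves with an RBF kernel and weights each pair by the alignment of the associated unit normals. The following sketch is schematic; the exact weighting used for the results in the main part may differ.

import numpy as np

def rkhs_curve_discrepancy(P, NP, Q, NQ, sigma_w):
    # P, Q: (n, 2) point sets of the two curves; NP, NQ: their unit normals (e.g. from curve_unit_normals above)
    def term(A, NA, B, NB):
        sq_dist = np.sum((A[:, None, :] - B[None, :, :]) ** 2, axis=-1)
        K = np.exp(-sq_dist / (2.0 * sigma_w ** 2))   # spatial correlation of all point pairs
        return np.sum(K * (NA @ NB.T))                # weighted by the alignment of the normals
    return term(P, NP, P, NP) + term(Q, NQ, Q, NQ) - 2.0 * term(P, NP, Q, NQ)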

Brief presentation of the Gaussian process regression model used

A Gaussian process (GP) is fully defined by its mean function \( \textrm{m}\left( x\right) \) and a correlation function \(k\left( \bullet ,\bullet \right) \) (or kernel, see, e.g., [39, 53]) and describes a distribution over functions \(f(\varvec{x})\), such that for a fixed \(\hat{\varvec{x}}\), the function value \(f(\hat{\varvec{x}})\) is normally distributed according to \(f(\hat{\varvec{x}}) \sim \mathcal {N}\left( f|\textrm{m}\left( \hat{\varvec{x}}\right) ,\textrm{k}_{x}\left( \hat{\varvec{x}},{\hat{\varvec{x}}}\right) \right) \). A realization of a GP can be written as

$$\begin{aligned} f(x) \sim \mathcal{G}\mathcal{P}\left( \textrm{m}\left( \varvec{x}\right) ,{k\left( \varvec{x},\varvec{x}'\right) }\right) . \end{aligned}$$
(26)

As the name suggests, the mean function \( \textrm{m}\left( \varvec{x}\right) \) represents the statistical mean over an infinite number of function realizations \(f_i(\varvec{x})\). The kernel or covariance function \(k\left( \varvec{x},\varvec{x}'\right) \) encodes the correlation of two function values \(f(\varvec{x})\) and \(f(\varvec{x}')\) at different inputs \(\varvec{x}\) and \(\varvec{x}'\). In analogy to the RKHS, the choice of kernel hence encodes assumptions about the smoothness, complexity and further characteristics of the underlying function.

If no data \(\mathcal {D}=\{X_\textrm{train},\varvec{y}_\textrm{train}\}\) is accounted for in (26), this expression can be interpreted as the so-called prior GP. The specific selection of \( \textrm{m}\left( x\right) \) and \(k\left( \varvec{x},\varvec{x}'\right) \) can be used to integrate prior knowledge when GPs are used for regression tasks. In this work we restrict the considerations to a constant prior mean function \(\textrm{m}\left( \varvec{x}\right) =const\).

Remark 10

(Prior mean for logarithmic likelihood) For our application, the prior mean function \(\textrm{m}\left( \varvec{x}\right) \) cannot be neglected, i.e., set to zero: the log-likelihood for Bayesian calibration (5), for which we build the regression model, must not simply revert to zero far away from the training points \({\varvec{x}_\textrm{train}}_i\), as the associated likelihood would then tend towards a finite value (\(\sim \exp {(0)}\)). Strictly speaking, the log-likelihood should tend towards negative infinity in irrelevant regions, which is numerically infeasible; instead, a sufficiently low constant mean ensures that the likelihood still tends towards zero there. Therefore, we define an auxiliary prior mean that is lower than any log-likelihood occurring in the training data, \(\textrm{m}\left( \varvec{x}\right) = \textrm{min}\left( \varvec{\mathfrak {L}}_\textrm{train}\right) - 1.0\cdot \left( \textrm{max}\left( \varvec{\mathfrak {L}}_\textrm{train}\right) -\textrm{min}\left( \varvec{\mathfrak {L}}_\textrm{train}\right) \right) \), with the log-likelihoods \(\varvec{\mathfrak {L}}_\textrm{train}\) as training outputs.
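In code, the auxiliary prior mean of Remark 10 is a one-line computation on the training log-likelihoods, as in the following sketch; the function name is our own.

import numpy as np

def auxiliary_prior_mean(log_lik_train):
    # Constant prior mean placed below all training log-likelihoods, cf. Remark 10
    lo, hi = np.min(log_lik_train), np.max(log_lik_train)
    return lo - 1.0 * (hi - lo)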

In this work we choose a Matérn 3/2 kernel function [39, 63, 64] for the regression model of the log-likelihood function. In contrast to the infinitely differentiable radial basis function kernel used in (23), the Matérn 3/2 kernel is a once-differentiable covariance function [39] and hence yields samples \(f(\varvec{x})\) with lower smoothness requirements. As the log-likelihood might be peaked in areas of high posterior probability density or might exhibit abrupt functional changes, we want to relax the smoothness requirements on the regressor. The Matérn 3/2 kernel function is defined as

$$\begin{aligned} \textrm{k}_{\textrm{Matern},\eta = 3/2}\left( \varvec{x},{\varvec{x}'}\right) = {\sigma _\textrm{k}^2}\left( 1+\frac{\sqrt{3} \left\| \varvec{x}-\varvec{x}'\right\| _{2}}{{l_\textrm{k}}}\right) \exp {\left( -\frac{\sqrt{3} \left\| \varvec{x}-\varvec{x}'\right\| _{2}}{{l_\textrm{k}}}\right) }. \end{aligned}$$
(27)

The hyper-parameters \({\sigma _\textrm{k}^2}\) and \({l_\textrm{k}}\) control the magnitude of the covariance and its length scale, respectively.
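A direct NumPy implementation of (27) for two point sets could read as follows; function and argument names are our own choices.

import numpy as np

def matern32_kernel(X, Y, sigma_k2, l_k):
    # Matern 3/2 covariance matrix, cf. (27), for X of shape (n, d) and Y of shape (m, d)
    r = np.sqrt(np.sum((X[:, None, :] - Y[None, :, :]) ** 2, axis=-1))
    a = np.sqrt(3.0) * r / l_k
    return sigma_k2 * (1.0 + a) * np.exp(-a)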

Typically, training data \(\mathcal {D}=\{X_\textrm{train},\varvec{y}_\textrm{train}\}\) is available and the prior GP can be conditioned on \(\mathcal {D}\) to yield the posterior GP, whose posterior mean function \(\bar{f}_{*}(\varvec{x}_{*})\) is then used as a regression model. Training data are input–output pairs of values that are known or can be determined systematically. The posterior variance function \(\mathbb {V}\left[ {f_{*}}\right] (\varvec{x}_{*})\) serves as a measure for the uncertainty in the regression model. Here, it is assumed that \(\varvec{y}_\textrm{train}=\begin{bmatrix}y_1,\dots ,y_{n_\textrm{train}}\end{bmatrix}^{\textsf{T}}\) consists of \(n_\textrm{train}\) scalars (the log-likelihood values \(\varvec{\mathfrak {L}}_\textrm{train}\) in the presented approach) at potentially vector-valued training inputs \(X_\textrm{train}=\{{\varvec{x}_\textrm{train}}_{,i}\}\big \vert _{i=1}^{n_\textrm{train}}\) (the forward model parameters \({\varvec{x}_\textrm{train}}\) in the presented approach). The test point \(\varvec{x}_{*}\) denotes a new input for the prediction of the regression model. The posterior mean and variance functions can be calculated from the prior GP and the training data \(\mathcal {D}\) using the following expressions [39]:

$$\begin{aligned} \bar{f}_{*} = \textrm{m}\left( \varvec{x}_{*}\right) + \varvec{k}_{*}^{\textsf{T}}\left( \varvec{K}+ \sigma _\textrm{n}^2\varvec{I}\right) ^{-1} \left( \varvec{y}_\textrm{train}-\varvec{\textrm{m}}\right) , \end{aligned}$$
(28a)
$$\begin{aligned} \mathbb {V}\left[ {f_{*}}\right] = \textrm{k}_{x}\left( \varvec{x}_{*},{\varvec{x}_{*}}\right) -\varvec{k}_{*}^{\textsf{T}}\varvec{K}^{-1}\varvec{k}_{*}. \end{aligned}$$
(28b)

In (28) we use the abbreviations \(\varvec{K}= k\left( X_\textrm{train},X_\textrm{train}\right) \) for the covariance matrix at the training inputs and the vector \(\varvec{k}_{*}= k\left( X_\textrm{train},\varvec{x}_{*}\right) \) for the kernel evaluations between the test point and the training data. \(\varvec{\textrm{m}}\) is a vector of length \(n_\textrm{train}\) populated with the constant prior mean \(\textrm{m}\left( \varvec{x}\right) \) in all entries. The so-called nugget noise variance \(\sigma _\textrm{n}^2\) is used for numerical stability of the GP [39].
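A compact sketch of the posterior prediction (28) with a constant prior mean is given below; the kernel is passed as a generic callable, e.g. a lambda wrapping the matern32_kernel sketch above, and the default nugget value is only a placeholder.

import numpy as np

def gp_predict(X_train, y_train, x_star, m_const, kernel, sigma_n2=1e-8):
    # Posterior mean and variance at a single test point x_star, cf. (28); the nugget is added to K for stability
    K = kernel(X_train, X_train) + sigma_n2 * np.eye(X_train.shape[0])
    k_star = kernel(X_train, x_star[None, :])[:, 0]
    alpha = np.linalg.solve(K, y_train - m_const)
    mean = m_const + k_star @ alpha          # reverts to m_const far away from the training data
    var = kernel(x_star[None, :], x_star[None, :])[0, 0] - k_star @ np.linalg.solve(K, k_star)
    return mean, var

For the log-likelihood surrogate, m_const would be the auxiliary prior mean of Remark 10 and the kernel, e.g., kernel = lambda A, B: matern32_kernel(A, B, sigma_k2, l_k).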

The optimization of the so-called marginal likelihood or evidence of the GP w.r.t. the hyper-parameters \({\sigma _\textrm{k}^2}, {l_\textrm{k}}\) in (27) is known as the training of the Gaussian process. The log-marginal likelihood expresses the likelihood of the training data under the chosen model parameterization; its negative is given by

$$\begin{aligned} - \log {p\left( \mathcal {D}|{\sigma _\textrm{k}^2}, {l_\textrm{k}}\right) } = \frac{1}{2} \left( \varvec{y}_\textrm{train}-\varvec{\textrm{m}}\right) ^{\textsf{T}}\varvec{K}^{-1} \left( \varvec{y}_\textrm{train}-\varvec{\textrm{m}}\right) + \frac{1}{2} \log {\left| \varvec{K}\right| } + \frac{n_\textrm{train}}{2}\log {2 \pi }. \end{aligned}$$
(29)

For numerical reasons one usually minimizes the negative log-marginal likelihood instead of maximizing the marginal likelihood directly.
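As a sketch, the objective (29) can be evaluated with a Cholesky factorization of the covariance matrix and then minimized over the hyper-parameters with a standard quasi-Newton or gradient-free optimizer; the function below assumes that K has already been assembled from the chosen kernel plus the nugget, as in the previous sketch.

import numpy as np

def neg_log_marginal_likelihood(y_train, m_const, K):
    # Negative log-marginal likelihood (29) for a given covariance matrix K
    resid = y_train - m_const
    L = np.linalg.cholesky(K)                             # K = L L^T
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, resid))
    return (0.5 * resid @ alpha                           # data-fit term
            + np.sum(np.log(np.diag(L)))                  # 0.5 * log|K|
            + 0.5 * resid.size * np.log(2.0 * np.pi))     # normalization constant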

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

About this article

Cite this article

Willmann, H., Nitzler, J., Brandstäter, S. et al. Bayesian calibration of coupled computational mechanics models under uncertainty based on interface deformation. Adv. Model. and Simul. in Eng. Sci. 9, 24 (2022). https://doi.org/10.1186/s40323-022-00237-5


Keywords