
Efficient structural reliability analysis by using a PGD model in an adaptive importance sampling schema

Abstract

One of the most important goals in civil engineering is to guarantee the safety of a structure. Standards prescribe a required failure probability in the order of \(10^{-4}\) to \(10^{-6}\). Generally, it is not possible to compute the failure probability analytically, and many approximation methods have been developed to estimate it. Nevertheless, these methods still require a large number of evaluations of the investigated structure, usually finite element (FE) simulations, making full probabilistic design studies infeasible for relevant applications. The aim of this paper is to increase the efficiency of structural reliability analysis by means of reduced order models. The developed method paves the way for using full probabilistic approaches in industrial applications. In the proposed PGD reliability analysis, the solution of the structural computation is directly obtained from evaluating the PGD solution for a specific parameter set, without computing a full FE simulation. Additionally, an adaptive importance sampling scheme is used to minimize the total number of required samples. The accuracy of the failure probability depends on the accuracy of the PGD model (mainly influenced by the mesh discretization and the mode truncation) as well as on the number of samples in the sampling algorithm. Therefore, a general iterative PGD reliability procedure is developed to automatically verify the accuracy of the computed failure probability. It is based on a goal-oriented refinement of the PGD model around the adaptively approximated design point. The methodology is applied and evaluated for 1D and 2D examples. The computational savings compared to the method based on an FE model are shown, and the influence of the accuracy of the PGD model on the failure probability is studied.

Introduction

One of the most important goals in civil engineering is to guarantee the safety of the construction. Therefore, failure probabilities \(P_f\) in the order of \(10^{-4}\) to \(10^{-6}\) in the ultimate limit state are required in standards (e.g. Eurocode 0, see footnote 1). The failure probability is mathematically defined as the integral of an n-dimensional joint probability density function of the random parameters over the failure domain. The failure domain is implicitly given by means of the limit state function. The limit state function itself is usually also only implicitly given by the comparison of the resistance (e.g. the material strength) and the effect of action (e.g. the stress caused by a given loading) of the structure. Often, a numerical simulation (finite element (FE) simulation) of the structure is needed to evaluate the limit state function. For that reason, it is generally not possible to compute the failure probability analytically. Numerical methods (such as variants of Monte-Carlo simulation) are needed to estimate the failure probability. The drawback of these methods is the need for a huge number of evaluations of the implicitly given limit state function, each including an FE simulation of the structure for a different parameter set. For that reason, a full probabilistic design is often not feasible. Instead, the so-called safety factor concept (see footnote 1) is used in industrial applications. The uncertainties in loads as well as in material properties are included in partial safety factors which are given in standards. In this way, the resistance of the structure is decreased by a partial safety factor for material properties (e.g. 1.5 for steel-reinforced concrete) and the effect of action is increased by a partial safety factor considering the actions (e.g. 1.35 for permanent actions like dead load). Generally, the load-bearing capacity of the structure is underestimated using the safety factor concept as compared to a full probabilistic design. By activating these hidden capacities, the efficiency of the design can be improved, thereby decreasing costs.

In the present paper, this problem is addressed, on the one hand, by introducing an efficient structural reliability analysis by using the advantages of model reduction techniques to reduce the computational effort of evaluating the limit state function. On the other hand, the surrogate will be used in an adaptive importance sampling method which decreases the total number of required evaluations of the limit state function. As a key idea, a goal-oriented procedure is proposed that automatically adapts the numerical computation of the failure probability based on the required accuracy. This includes in particular an adaptive refinement procedure for the surrogate evaluated within the adaptive sampling procedure.

Several numerical methods are available to estimate the failure probability of structures. In a crude Monte-Carlo simulation, the limit state function is evaluated for a sufficiently large number of realizations of the random parameters and the number of samples within the failure domain is counted. For small failure probabilities (e.g. \(10^{-6}\)), the number of forward simulations is prohibitively large and thus computationally very expensive [1]. There are many approaches to overcome this problem, based on faster approximations of the limit state function by surrogate modelling (e.g. first and second order reliability methods (FORM, SORM) [2] or the response surface method [3]), on sampling methods which reduce the required number of evaluations (e.g. importance sampling [4]), or on combinations of both. The second group comprises the so-called variance-reducing sampling strategies, e.g. importance sampling [4,5,6], directional sampling [7,8,9], asymptotic sampling [1, 10, 11] or subset simulation methods [12,13,14], to name a few. In the importance sampling method, an additional sampling density or weighting function is used to concentrate the samples in the most important region, i.e. close to the design point. The design point is the point on the (hyper-)surface separating failure and safe domain with the maximum probability density. The difficulty of this approach is the proper choice of the sampling density function. Possible methods to define this function are e.g. the design point method [15], kernel density estimators [16] or adaptive sampling strategies [17, 18]. In the commonly used design point method, the design point is estimated by solving an additional optimization problem, often based on gradient methods. In contrast, in the adaptive strategies the important region is found in an adaptive way based on presamples [18]. The idea of directional sampling is to reduce the dimension of the failure probability integral. In this case, one-dimensional lines along randomly chosen directions, starting from the origin or the mean of the joint probability density function, are investigated. The asymptotic sampling [1], the subset [12] as well as the spherical subset simulation [19] decompose the problem into sub-problems with a higher probability of failure than that of the original problem. Since the number of required samples increases with decreasing failure probability, events with higher probability of failure can be estimated with lower computational effort. All these approaches still require thousands of realizations of the limit state function to estimate the failure probability. Running the underlying FE model such a huge number of times results in an enormous computational effort. For that reason, the application of such methods to real problems is often not feasible. Methods based on surrogate modelling address this problem. The idea of these methods is to replace the true limit state function by a surrogate that is computationally faster to evaluate. Different surrogates are discussed in the literature. In case of response surface methods, the approximation is typically done by first or second order polynomials. An application to structural reliability can be found in [3] and one to fracture mechanics in [20]. The limit state function can also be represented using an Artificial Neural Network (ANN) model [21, 22], a Kriging model [5, 23], a polynomial chaos expansion [24] or models based on statistical learning theory [25].
These surrogate models are usually black box models that rely on a set of training samples. The training samples are full evaluations of the original limit state function, used e.g. to train the ANN model or to define the coefficients of the polynomial. After the definition of the surrogate, the probability of failure can be estimated by means of Monte Carlo simulation. All above-mentioned simulation-based sampling methods can be used, whereby the surrogate is evaluated at each sample point instead of the original model.

In other fields, a common way to reduce the computational effort of numerical simulations is the application of model order reduction techniques. The goal is to find the lowest dimensional system which can capture the dominant behavior of the investigated system. Projection-based model reduction methods, such as reduced basis [26,27,28] and Proper Orthogonal Decomposition (POD) [29,30,31], are among the most popular reduced order models in structural mechanics. The key idea is to project the system onto a lower dimensional subspace and thus to reduce the size of the underlying discrete equation system. These approaches split the calculation into an offline and an online part. In the computationally expensive offline part, the lower dimensional subspace is defined by running unreduced test simulations with different parameter values and generating so-called snapshots (solutions) from these test simulations. In the computationally efficient online part, this subspace is used to reduce the size of the equation system. For nonlinear problems, the reduction of the computational effort of such approaches is limited, because the reduced equation system is still nonlinear. Furthermore, the evaluation of the components (stress, stiffness) must be performed over the full domain, which means at each Gauss point of the original system. Many strategies to deal with these nonlinearities have been published, e.g. [32,33,34,35,36,37], to cite a few. An application of POD as a surrogate combined with self-organizing maps for real-time structural assessment can be found in [38]. An application using a reduced basis model as a surrogate within Monte Carlo sampling to estimate the failure probability was developed in [39]. A “generalization of the POD” [40] is the Proper Generalized Decomposition (PGD) method. By means of the PGD approach, it is possible to a-priori compute abacuses with which all solutions of a system depending on many parameters are simultaneously given. An overview of the PGD approach can be found in [41] as well as in [42] concerning its use for digital twins. The key ingredients of PGD are a separated representation of the unknown solution field, a weak formulation over the multidimensional space and an iterative solution strategy. First, the space is extended by extra coordinates representing the additional parameters. In the present paper, these parameters correspond to the random variables that define the reliability problem. Second, the solution is approximated in a separated form as a product of PGD modes depending only on a single PGD coordinate. Inserting this approach into the weak form formulated over the whole multidimensional space leads to a complex nonlinear problem, even in case of an originally linear problem. Therefore, an iterative solution method based on a fixed-point iteration scheme is used to solve the PGD problem. Different PGD solvers have been developed, e.g. the progressive PGD, the Galerkin PGD, a minimum residual formulation or a Petrov-Galerkin method [40]. In the last two decades, the PGD method has been developed for a variety of applications, among these surgery simulations [43], design optimization [44, 45], data-driven applications [46], viscoelastic [45], elastoplastic [47] as well as contact problems [48]. Nouy [49, 50] used the idea of a separated representation in his Generalized Spectral Decomposition method to reduce the computational time of the stochastic Galerkin method.
The PGD approach for stochastic problems is applied to linear structural dynamics in [51]. Rubio et al. [52] coupled a Bayesian inference and a PGD model for real-time identification and model updating. Gallimard et al. [53] used a PGD model of a linear elastic structure for an efficient computation of the failure probability with FORM.

The aim of this paper is to increase the efficiency of structural reliability analysis by using the advantages of model reduction techniques and adaptive importance sampling. The key idea of the proposed method is the integration of an iterative goal-oriented refinement of the reduced order model in the most important region, directly controlling the accuracy of the estimate for the failure probability. The proposed coupling idea is thus an extension of the similar surrogate approaches reviewed above, especially of [39, 53]. In contrast to standard surrogate models, which are black box models relying on a set of training samples, a reduced order model includes the real physical behavior of the structure in the whole domain. Furthermore, it allows performing convergence studies. Since the accuracy of the failure probability depends on the accuracy of the surrogate near the failure domain, the adaptive refinement procedure for the surrogate is an important issue often missing in the literature. In [39], a reduced basis model and Monte Carlo sampling are used, whereas in [53] a PGD model is evaluated in a FORM reliability study. This paper proposes to embed a PGD model in an adaptive importance sampling approach. The advantage of using PGD instead of a projection-based model reduction is the possibility to derive a limit state function in an affine separated representation containing modes depending on the random parameters. Additionally, reduced basis, POD as well as hyper reduction approaches require a significantly higher computational effort in the online phase compared to PGD. Furthermore, by combining the reduced surrogate with a variance-reducing sampling technique instead of using FORM or Monte Carlo, the efficiency on the sampling side is increased. The proposed method can be used for linear as well as nonlinear limit state functions, in contrast to FORM, which always assumes a linear limit state function close to the design point. The proposed adaptive importance sampling algorithm minimizes the total number of samples, and a PGD reduced order model increases the computational efficiency of the evaluation at each realization point [54]. The focus of the present paper is to study the proposed method for structural reliability problems concerning convergence and errors introduced by the reduced order model (mode truncation and mesh discretization) as well as by the sampling technique (number of samples, approximation of the design point). An iterative method is proposed to capture the different error sources within the proposed reliability algorithm. By means of this iterative PGD reliability procedure, the accuracy of the resulting failure probability is automatically verified based on an iterative refinement of the PGD model.

The paper is structured as follows. The PGD reliability method is developed by first reviewing the general equations and definitions for computing the probability of failure. Second, the PGD method is outlined, applied to structural mechanics with random parameters as extra coordinates, and the error sources are discussed. Afterwards, the adaptive importance sampling technique, using the subset idea to define the importance sampling density and a PGD surrogate, is described. Finally, a general iterative PGD reliability procedure is derived to take the error sources of the method concerning the reduced model as well as the sampling into account. Subsequently, the iterative PGD reliability procedure is applied to a one- and a two-dimensional structural problem. The influence of the accuracy of the PGD model, compared to the FE model, on the failure probability as well as the achieved speed-up are discussed.

Iterative PGD reliability method

This section introduces the proposed iterative PGD reliability method. It starts with a short definition of the failure probability. Afterwards, the PGD reduction is reviewed and applied to mechanical problems. The adaptive importance sampling strategy using a PGD model is introduced. Furthermore, the errors in the coupled reliability method concerning the reduced model as well as the sampling strategy are discussed. At the end, the fully iterative PGD reliability method is derived based on a goal-oriented refinement of the reduced PGD model and an adaptive estimation of the design point.

Probability of failure

The probability of failure \(P_f\) is defined as the integral of the n-dimensional joint probability density function \(f_X\) of all given random parameters \(\varvec{X}=[X_1, X_2, \dots , X_n]\) over the failure domain F

$$\begin{aligned} P_f = \int \limits _{F} f_X(\varvec{X}) \, d\varvec{X}. \end{aligned}$$
(1)

For a review of reliability analysis, the authors refer to [2]. The limit state function \(g(\varvec{X})\) separates the random parameter space \(\Omega _{\varvec{X}}\) into a safe domain with \(g(\varvec{X}) > 0\) and a failure domain F with \(g(\varvec{X})\le 0\). In structural reliability analysis [55], the limit state function can often be written as a function of the resistance R and the corresponding effect of action E both depending on \(\varvec{X}\)

$$\begin{aligned} g(\varvec{X})=R(\varvec{X})-E(\varvec{X}). \end{aligned}$$
(2)

The resistance is the capacity of a component of the structure to resist loading conditions (actions) without failure (e.g. the material strength). The effect of action describes the effect of loading conditions on structural members (e.g. the stresses or deflections) (see footnote 1). Different definitions of the resistance and the effect of action in the limit state with respect to the failure criterion exist. Practically relevant failure criteria are the restriction of the stresses related to the strength of the material or a damage criterion for the ultimate limit state as well as the restriction of the displacements for the serviceability limit state. The effect of action \(E(\varvec{X})\) is usually only implicitly given by the numerical solution of an underlying structural model, usually a FE simulation. Following EC 0 (see footnote 1) the required failure probabilities should be in the range of \(10^{-4}\) to \(10^{-6}\) for the ultimate limit state and \(10^{-1}\) to \(10^{-3}\) for the serviceability limit state.
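To make the roles in Eq. 2 concrete, a minimal sketch is given below. The callables `resistance` and `action_effect` are hypothetical placeholders; in practice, the effect of action would wrap an FE (or, later, PGD) evaluation of the structure.

```python
import numpy as np

def limit_state(X, resistance, action_effect):
    """Limit state g(X) = R(X) - E(X) (Eq. 2); g(X) <= 0 indicates failure."""
    return resistance(X) - action_effect(X)

# Toy usage with a displacement-type criterion u_limit - |u(X)|:
g = limit_state(
    np.array([1.0, 2.0]),                      # realization of X = [X_1, X_2]
    resistance=lambda X: 0.33,                 # e.g. a displacement limit
    action_effect=lambda X: abs(X[0] / X[1]),  # e.g. a computed response
)
print("failure" if g <= 0 else "safe")
```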

Proper Generalized Decomposition for structural reliability

By means of the PGD approach, it is possible to compute numerical abacuses providing solutions of complex systems for any parameter configuration within the bounds of the abacus. An overview of the PGD approach can be found in [41, 42]. The idea of PGD is to approximate a parameter dependent solution field in a separated representation of smaller dimensional components. In this way, the curse of dimensionality can be bypassed by transforming a multidimensional integral into products of smaller dimensional integrals. It resembles the canonical tensor decomposition, which is known as the most data-sparse representation of a tensor. For more information about tensor approximations, the reader is referred to the work of [56] or [57] and the references therein. In contrast to projection-based model reduction approaches (reduced basis, POD, hyper reduction), all solutions of a complex system can be computed by evaluating a few simple numerical functions of small dimension. These function evaluations are computationally very efficient and there is no need to solve any equation systems. In the following, the PGD ansatz is reviewed for the example of structural reliability and its specifics. Additionally, the necessity of extended convergence studies for PGD models to control the model quality is discussed.

In structural mechanics, the displacements \(\varvec{u}\) always depend on the physical space \(\varvec{x}\). In addition, they are also influenced by the model parameters such as material parameters, load configuration, geometry or boundary conditions. Classically, these model parameters are assumed to be deterministic for the computation of the displacement field. In a reliability study, some of these usually deterministic parameters become random parameters. For an efficient estimation of the failure probability, a parameter dependent solution is needed. The PGD idea is to include these parameters as extra coordinates and use a separated representation of the displacement field

$$\begin{aligned} \varvec{ u}(\varvec{\kappa }) \approx \varvec{ u}^M =\sum \limits _{i=1}^{M} \prod \limits _{k=1}^{d} F_k^i(\kappa _k), \end{aligned}$$
(3)

which explicitly depends on the new parameters \(\kappa _k\). The sum in Eq. 3 consists of products of d PGD modes \(F_k^i\) depending only on one coordinate \(\kappa _k\) with \(k \in [1,d]\). The PGD coordinates \(\varvec{\kappa }\) can include all possible coordinates. In the context of a reliability analysis, \(\varvec{\kappa }\) is the space \(\varvec{x}\) enlarged with the n random parameters \(\varvec{X}\) according to Eq. 2:

$$\begin{aligned} \varvec{ u}^M(\varvec{x},X_1,X_2,\dots , X_n) = \sum \limits _{i=1}^{M} \varvec{F}_1^i(\varvec{ x}) \, F_2^i(X_1) \, F_3^i(X_2) \, F_4^i(X_3) \dots F_{n+1}^i(X_n). \end{aligned}$$
(4)

The dimensionality of the PGD modes is defined by their coordinates \(\varvec{x}\) or \(X_i\), i.e. the dimension of \(\varvec{F}_1^i(\varvec{x})\) corresponds to the spatial dimension of the original problem, whereas all other modes are often one-dimensional. The PGD modes \(F_k^i\) can be defined using the variational formulation of the problem's differential equation. For structural problems, the approximation \(\varvec{u}^M\) of Eq. 3 is inserted into the balance of momentum and the weak form is built by integrating over the multidimensional space \(\Omega ^d = \Omega ^{\kappa _1} \times \dots \times \Omega ^{\kappa _d}\)

$$\begin{aligned} \int _{\Omega ^d} \varvec{\sigma }(\varvec{\kappa }) \, \delta \varvec{\varepsilon }^M(\varvec{\kappa }) \,d\Omega ^d = \int _{\Omega ^d} \varvec{f}_V(\varvec{\kappa })\,\delta \varvec{u}^M(\varvec{\kappa })\,d\Omega ^d + \int _{\Gamma _t} \varvec{t}(\varvec{\kappa })\,\delta \varvec{u}^M(\varvec{\kappa })\,d\Gamma _t. \end{aligned}$$
(5)

In the last expression, \(\varvec{\sigma }\) is the stress and \(\varvec{\varepsilon }\) the strain tensor. The vector \(\varvec{f}_V(\varvec{\kappa })\) describes a volume load and \(\varvec{t}(\varvec{\kappa })\) a boundary load given by the Neumann condition \(\varvec{\sigma } \varvec{n} = \varvec{t}\) on \(\Gamma _t\). The homogeneous Dirichlet conditions are given as \(\varvec{u} = \varvec{0}\) on \(\Gamma _u\). \(\varvec{\varepsilon }^M\) gives the strains in separated representation following the kinematic equation \(\varvec{\varepsilon }^M = \text {grad}_{\text {sym}}(\varvec{u}^M)\). A (linear) material law \(\varvec{\sigma }(\varvec{\kappa }) = \varvec{C}_{elastic}(\varvec{\kappa }) : \varvec{\varepsilon }^M(\varvec{\kappa })\) completes problem Eq. 5.

The PGD problem resembles the standard finite element weak formulation. Due to the integration over a multidimensional space, it is much more complex and turns into a nonlinear problem. The benefit of the PGD idea using a separated approach is the transformation of a multidimensional integral into a product of smaller dimensional integrals, \(\int _{\Omega ^d} \varvec{u}^M(\varvec{x},X_1,X_2) \, d\Omega ^d = \sum \limits _{i=1}^M \left[ \int _{\varvec{x}} \varvec{F}_1^i(\varvec{x}) \, d\Omega ^x \, \int _{X_1} F_2^i(X_1) \, d\Omega ^{X_1} \, \int _{X_2} F_3^i(X_2) \,d\Omega ^{X_2} \right] \). For that reason, not only the displacement field but all input functions have to be approximated by an affine separated representation. For a detailed study of the influence of the separated approximation of input data in the PGD approach, the reader is referred to Zlotnik et al. [58]. Here, this means that the load functions as well as the material law are also given in the form of

$$\begin{aligned}&\varvec{f}_V(\varvec{x},X_1,X_2,\dots , X_n) = \sum \limits _{i=1}^{V_f} \varvec{V}_1^i(\varvec{ x}) \, V_2^i(X_1) \, V_3^i(X_2) \dots V_{n+1}^i(X_n), \nonumber \\&\varvec{t}(\varvec{x},X_1,X_2,\dots , X_n) = \sum \limits _{i=1}^{T_t} \varvec{T}_1^i(\varvec{ x}) \, T_2^i(X_1) \, T_3^i(X_2) \dots T_{n+1}^i(X_n),\nonumber \\&\varvec{\sigma }(\varvec{x},X_1,X_2,\dots , X_n) = \sum \limits _{i=1}^{S_s} \varvec{S}_1^i(\varvec{ x}) \, S_2^i(X_1) \, S_3^i(X_2) \dots S_{n+1}^i(X_n). \end{aligned}$$
(6)

Usually, these separations can be derived analytically. For the examples in this paper, this is the case and the separations are shown later. If it is not possible to derive a decomposition analytically, approximations must be used. The smaller dimensional functions can e.g. be computed based on SVD or POD approaches [58].
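This separated structure is what makes the online phase cheap. The sketch below evaluates a solution of the form of Eq. 3 from stored one-dimensional modes; the mode arrays and grids are assumed inputs, and simple linear interpolation stands in for the FE shape functions of an actual implementation.

```python
import numpy as np

def eval_pgd(modes, grids, kappa):
    """Evaluate u^M(kappa) = sum_i prod_k F_k^i(kappa_k) (Eq. 3).

    modes[k] is an (M, n_k) array with the M modes of coordinate k sampled
    on the 1D grid grids[k]; kappa collects one value per PGD coordinate.
    """
    u = 0.0
    for i in range(modes[0].shape[0]):       # loop over the M mode products
        term = 1.0
        for Fk, grid, kap in zip(modes, grids, kappa):
            term *= np.interp(kap, grid, Fk[i])
        u += term
    return u

# Evaluating the abacus at one parameter set costs only d interpolations per
# mode -- no equation system is solved in the online phase.
```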

There exist different ways to solve the problem in Eq. 5 in a discrete manner [40]. The standard approach is the progressive PGD strategy [59], which is used in this paper as well. For a proof of the convergence of the progressive PGD for elliptic problems, the authors refer to Falco et al. [60]. The problem is solved iteratively

$$\begin{aligned} \varvec{u}^M= \sum \limits _{i=1}^{M-1} \prod \limits _{k=1}^{d} F_k^i(\kappa _k) + \prod \limits _{k=1}^{d} R_k(\kappa _k) = \varvec{u}^{M-1} + \prod \limits _{k=1}^{d} R_k(\kappa _k). \end{aligned}$$
(7)

The enrichment functions \(R_k\) for each enrichment step \(m \ge 1\) are computed in a fixed-point iteration. This solution approach is also known as the alternating direction method. In each enrichment step m, d small dimensional problems are considered, one for each PGD variable \(\kappa _k\). These small dimensional problems are solved using the preferred discretization scheme (here FE) in a loop until the enrichment functions \(R_k\) have converged. Afterwards, the solution \(\varvec{u}^m\) is updated following Eq. 7 with \(M=m\). As stopping criterion for the enrichment steps, the amplitude

$$\begin{aligned} A^m = \prod \limits _{k=1}^{d} ||F^m_k(\kappa _k)||_{L2} < \mathrm{tol} \end{aligned}$$
(8)

according to Zou et al. [61] is used. If the tolerance is reached, the enrichment loop stops and the final solution is given with \(M=m\) summands. The implemented PGD algorithm is summarized in Fig. 1.

Fig. 1: Progressive PGD algorithm
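To make the loop structure concrete, the following is a didactic Python sketch of the progressive enrichment with its inner fixed-point iteration for the simplest parametrized case, a discrete system \(K(\mu )\,u = f\) with \(K(\mu ) = K_0 + \mu K_1\) and a single extra coordinate \(\mu \) on a grid. It is a simplified stand-in for the algorithm of Fig. 1, not the solver used in the paper; the Galerkin treatment of the parameter direction is reduced to a pointwise update, and the stopping criterion follows Eq. 8.

```python
import numpy as np

def progressive_pgd(K0, K1, f, mus, max_modes=20, tol=1e-8, fp_iters=100):
    """Sketch of the progressive PGD for K(mu) u = f with K(mu) = K0 + mu*K1,
    seeking u(x, mu) ~ sum_i X_i(x) m_i(mu) on the parameter grid `mus`
    (a 1D numpy array)."""
    n, p = K0.shape[0], len(mus)
    X_modes, m_modes = [], []
    for _ in range(max_modes):
        # residual R[:, j] = f - K(mu_j) u^M(:, j) of the current M-term sum
        R = np.tile(f[:, None], (1, p))
        for X, m in zip(X_modes, m_modes):
            R -= (K0 @ X)[:, None] * m + (K1 @ X)[:, None] * (mus * m)
        X, m = np.ones(n), np.ones(p)
        for _ in range(fp_iters):                   # fixed-point iteration
            m = m / np.linalg.norm(m)               # fix scaling ambiguity
            A = (m @ m) * K0 + (m @ (mus * m)) * K1
            X = np.linalg.solve(A, R @ m)           # spatial problem
            denom = X @ (K0 @ X) + mus * (X @ (K1 @ X))
            m_new = (X @ R) / denom                 # parameter problem
            if np.linalg.norm(m_new / np.linalg.norm(m_new) - m) < 1e-10:
                m = m_new
                break
            m = m_new
        if np.linalg.norm(X) * np.linalg.norm(m) < tol:  # amplitude, Eq. 8
            break
        X_modes.append(X)
        m_modes.append(m)
    return X_modes, m_modes
```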

The accuracy of the proposed PGD reduction is influenced by different aspects. For an accurate representation, the problem must be separable. This also implies that the input of the problem must be separable. Otherwise, the accuracy of the PGD solution is influenced by the approximation error on the input side. For this paper, we assume that this is always the case. The two main influences on the accuracy are then the discretization error for each coordinate and the truncation error [58], i.e. stopping the summation in Eq. 3 after a finite number M. A converged PGD approach is needed in terms of a converged mesh for all PGD coordinates and a converged number of modes to minimize these errors. Only if a sufficient quality of the reduced model is verified is it possible to quantify the influence of the reduced model, compared to the full FE model, on the accuracy of the results. In the literature, a very fine discretization for all PGD extra coordinates is suggested (e.g. [62]). Nevertheless, it is generally not clear what exactly is fine and sufficient. A combination of PGD with an adaptive h-refinement on the space coordinate can be found in [63]. In this paper, a goal-oriented refinement procedure considering both influences is included in the proposed iterative PGD reliability method to reach an accurate PGD model in the region of interest and, consequently, an accurate estimate of the failure probability.

Adaptive importance sampling coupled with PGD

The main idea of importance sampling is to use an additional density that generates samples directly in the region of interest. The key equations are reviewed following the notation in [4].

The integral (1) is rewritten as

$$\begin{aligned} P_f = \int \limits _{F} \frac{f_X(\varvec{V})}{h_V(\varvec{V})} \, h_V(\varvec{V}) \, d\varvec{V} = \int \limits _{\Omega } \frac{I(\varvec{V}) \ f_X(\varvec{V})}{h_V(\varvec{V})} \, h_V(\varvec{V}) \, d\varvec{V}, \end{aligned}$$
(9)

using the importance sampling density \(h_V\) and the indicator function \(I(\varvec{V}) = 1\) if \(g(\varvec{V}) \le 0\), otherwise \(I(\varvec{V}) = 0\). This new density function is used to generate samples directly in the region of interest, near the design point. The design point is the point on \(g=0\) with the minimum distance to the origin in the standard normal space (where \(f_X\) attains its maximum on the limit state). Non-Gaussian as well as correlated variables can be transformed into the Gaussian space based on e.g. the Nataf [64] or the Rosenblatt transformation [65]. The probability of failure can then be estimated as

$$\begin{aligned} E(P_f) \approx \hat{P}_f = \frac{1}{N} \sum \limits _{k=1}^N I(\varvec{v}_{k}) \, \frac{f_X(\varvec{v}_{k})}{h_V(\varvec{v}_{k})} \end{aligned}$$
(10)

using N samples \(\varvec{v}_{k}\) generated by \(h_V\). The variance of the failure probability in Eq. 10 reads:

$$\begin{aligned} Var(\hat{P}_f) = \frac{1}{N-1} \left[ \frac{1}{N} \sum \limits _{k=1}^N I(\varvec{v}_{k}) \, \frac{{f_X(\varvec{v}_{k})}^2}{{h_V(\varvec{v}_{k})}^2} - \hat{P}_f^2 \right] \end{aligned}$$
(11)

In the following, the estimated failure probability is denoted as \(P_f\) instead of \(\hat{P}_f\) to simplify the notation.
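Eqs. 10 and 11 translate almost literally into code. The sketch below assumes a Gaussian importance sampling density \(h_V\) (original covariance, shifted mean, as in the design point method discussed next) and vectorized, user-supplied callables `g` and `f_pdf`; these names are illustrative and not part of the paper's implementation.

```python
import numpy as np
from scipy import stats

def importance_sampling(g, f_pdf, h_mean, h_cov, N, seed=0):
    """Estimate P_f (Eq. 10) and Var(P_f) (Eq. 11) by importance sampling.

    g(v) and f_pdf(v) must accept an (N, n) array of samples; h_V is a
    Gaussian with the original covariance and a shifted mean.
    """
    h = stats.multivariate_normal(mean=h_mean, cov=h_cov)
    v = h.rvs(size=N, random_state=seed)                 # samples v_k ~ h_V
    w = np.where(g(v) <= 0, f_pdf(v) / h.pdf(v), 0.0)    # I(v_k) f_X / h_V
    pf = w.mean()                                        # Eq. 10
    var = (np.mean(w**2) - pf**2) / (N - 1)              # Eq. 11
    return pf, var
```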

A proper choice of \(h_V\) can significantly reduce the variance of the estimator \(P_f\). Various methods to define the importance sampling density exist, e.g. [4, 5, 15,16,17,18]. A common choice is the design point method, where the mean of the original density function \(f_{\varvec{X}}\) is shifted towards the design point while keeping the original covariance matrix. The design point can be computed e.g. by solving an optimization problem. Here, we propose an adaptive way based on the idea of subset simulation to estimate the design point and, with it, the importance sampling density.

In the subset simulation [12] approach, the problem is decomposed into subproblems with larger probabilities. In particular, the intermediate failure events are chosen as a decreasing sequence of p nested failure events

$$\begin{aligned} g_i = \{g(\varvec{X}) \le c_i \} \quad \text { with } c_1> c_2> \dots > c_p =0. \end{aligned}$$
(12)

The constants \(c_i\) shift the limit state (\(g_i=0\)) into regions of higher probabilities (see Fig. 2). It is possible to choose the constants \(c_i\) in an adaptive way using a threshold \(p_0\) for the conditional probabilities [13]. The probability of failure, \(P_f\), can then be expressed as a product of conditional probabilities \(P(g_{i+1}|g_i)\) and the probability of the first failure event \(P(g_1)\) [12]

$$\begin{aligned} P_f=P(g_p)=P \left( \bigcap \limits _{i=1}^p g_i \right) = P(g_1)\prod \limits _{i=1}^{p-1}P(g_{i+1}|g_i). \end{aligned}$$
(13)

In the literature, the conditional probabilities are usually estimated using conditional sampling by means of Markov Chain Monte Carlo simulation [12, 13, 66, 67]. The efficiency of such subset simulation approaches depends on the chosen intermediate failure events \(c_i\) and on the threshold \(p_0\). This influences the number of subsets as well as the required number of samples for each subset. An extension of the subset simulation technique for dynamical systems is developed in [68] and extended to multiple limit states in [69]. In the rare event simulation community, the subset simulation method is also known as sequential Monte Carlo [67] or splitting Monte Carlo method [66].

In contrast to these original subset simulation approaches based on conditional sampling algorithms, the idea of adaptively moving the limit state function is used here to define the design point and \(h_V\). Furthermore, the sampling is coupled with the PGD surrogate defined in the previous section. Of course, using a PGD model as a surrogate in a reliability analysis is not limited to this sampling method; it would speed up any other sampling procedure for computing the failure probability as well. The proposed PGD reliability method is summarized in Fig. 2.

Fig. 2: PGD reliability analysis based on adaptive design point estimation (a–c) and the estimation of \(P_f\) by means of importance sampling (d)

It starts with generating \(N_{p}\) samples from \(f_X\) (Fig. 2a) based on a Latin Hypercube (LHC) sampling. The random LHC design matrix with centered entries is generated based on [70]. From these first samples, the limits of the PGD extra coordinates, which are the random parameters \(X_i\) of the reliability analysis, can be defined. One option is to use the minimum and maximum of the sample points as the interval ranges for \(X_i\). In this case, an update of the PGD solution is needed for each step of the adaptive design point estimation that is used to define \(h_V\), resulting in additional computational effort. Therefore, the intervals are enlarged using the range of a multiple of the standard deviation (\(\pm \, \xi \cdot s_{X_i}\)) around the median of each random parameter \(X_i\) for \(i \in [1,n]\). The PGD problem is then solved for this enlarged parameter space. The PGD abacus is used to efficiently evaluate the limit state function \(g(\varvec{X})\) in the adaptive design point estimation (Fig. 2a–c). After each step, the interval ranges of the random parameters are verified and, if necessary, the PGD problem is solved again for an adapted parameter space. In the first step (Fig. 2a), the constant \(c_1\) is set equal to the \(p_0\)-percentile of the ordered limit state function values \(g({v}_{k})\) evaluated at each sample point \({v}_{k}\) for \(k \in [1,N_{p}]\). For that choice, \(P(g_1)=p_0\) is satisfied. \(p_0\) is a given threshold for the intermediate failure events based on the adaptive subset simulation approach; typically, it is in the range of 0.1 [13]. Afterwards, the samples lying in the failure domain \(F_1\) are averaged and used as the new mean of the importance sampling density in the next step. In the subsequent steps (Fig. 2b), the shifted limit state functions \(g_i\) for \(i \in [2,p]\) are chosen so that

$$\begin{aligned} P(g_{i})= p_0 \cdot P(g_{i-1}) = p_0^{i}. \end{aligned}$$
(14)

Based on the assumption of nested limit state functions, the statement in Eq. 14 resembles the condition of the adaptive subset simulation in [13]. Using the importance sampling ansatz with \({h_V}_{g_i}\), these probabilities can be estimated as

$$\begin{aligned} P(g_i) \approx \frac{1}{N_p} \sum \limits _{k=1}^{N_p} I_{g_i}(\varvec{v}_k) \; \frac{f_X(\varvec{v}_k)}{{h_V}_{g_i}(\varvec{v}_k)} = \frac{1}{N_p} \sum \limits _{k=1}^{N_p} I_{g_i}(\varvec{v}_k) \; \omega _{g_i}(\varvec{v}_k). \end{aligned}$$
(15)

As mentioned above, the mean of \({h_V}_{g_i}\) is defined as the average of the sample points lying in the previous failure domain \(F_{i-1}\). The \(p_0\)-percentile cannot be used directly to define the next limit state function. Instead, the samples and weights are ordered by increasing value of the limit state function \(g_i\). The weights are summed up until \(p_0\) is reached. \(c_i\) is then found as the value of the limit state function \(g_i\) for the last sample point included in the sum. This procedure is repeated until \(c_i\) becomes negative. At this point, the original failure event \(F_p\) with \(c_p=0\) is reached. In that last step (Fig. 2c), an estimate of the design point is computed by averaging samples around \(g=0\). Therefore, the \(10 \%\) of the \(N_p\) samples lying closest to the failure surface, located in the failure as well as in the safe domain, are averaged.
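The adaptive design point estimation of Fig. 2a–c can be condensed into the following sketch. It simplifies several details of the actual procedure: plain Gaussian sampling replaces the Latin Hypercube presampling, the PGD abacus is hidden behind the callable `g`, and the threshold update implements the weighted-percentile rule of Eqs. 14 and 15 as interpreted here, so the sketch should be read as one possible reconstruction rather than the exact algorithm.

```python
import numpy as np
from scipy import stats

def adaptive_design_point(g, f_pdf, mean0, cov, Np=100, p0=0.1, seed=0):
    """Sketch of the adaptive design point estimation (Fig. 2a-c).

    Each step draws Np samples from the current sampling density h
    (original covariance, shifted mean), picks c_i so that the estimated
    P(g_i) matches p0**i (Eqs. 14-15) and re-centers h on the mean of the
    samples in F_i, until the true limit state c_i <= 0 is reached.
    """
    mu, target = np.asarray(mean0, dtype=float), 1.0
    for i in range(100):                          # safety bound on steps
        h = stats.multivariate_normal(mean=mu, cov=cov)
        v = h.rvs(size=Np, random_state=seed + i)
        gv, w = g(v), f_pdf(v) / h.pdf(v)         # IS weights (Eq. 15)
        order = np.argsort(gv)                    # increasing g values
        target *= p0                              # P(g_i) = p0**i (Eq. 14)
        csum = np.cumsum(w[order]) / Np
        idx = min(np.searchsorted(csum, target), Np - 1)
        if gv[order][idx] <= 0:                   # true limit state reached:
            k = max(1, Np // 10)                  # average the 10% of samples
            closest = np.argsort(np.abs(gv))[:k]  # closest to g = 0
            return v[closest].mean(axis=0)        # design point estimate
        mu = v[order][: idx + 1].mean(axis=0)     # re-center h on F_i samples
    return mu
```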

Finally, the probability of failure \(P_f\) is computed using a standard importance sampling based on Eq. 10, with the sampling density \(h_V\) shifted towards the adaptively defined design point approximation (Fig. 2d). The number of samples N in this final estimation of the failure probability is generally chosen significantly larger than the number of samples \(N_p\) in the preceding adaptive steps, thus \(N>N_p\).
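Combined with the `importance_sampling` routine sketched above, this final step, including the sample-size convergence check described later for itPGDrel-PF, could look as follows; the tolerance on the coefficient of variation is a hypothetical choice.

```python
import numpy as np

def estimate_pf(g, f_pdf, dp_mean, cov, tol_cov=0.1, N0=100, N_max=25600):
    """Final importance sampling step (Fig. 2d): double N until the
    estimated CoV sqrt(Var(P_f))/P_f falls below tol_cov (or N_max is hit);
    reuses importance_sampling from the sketch above."""
    N = N0
    while True:
        pf, var = importance_sampling(g, f_pdf, dp_mean, cov, N)
        if (pf > 0 and np.sqrt(var) / pf < tol_cov) or N >= N_max:
            return pf, var, N
        N *= 2
```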

The derivation of the proposed adaptive importance sampling method has been shown for problems with a single design point. For multiple design points, extensions such as those proposed in [16] can be implemented. If the geometry of the limit state gets too complex, further extensions must be considered to guarantee that the global optimum is reached [71].

Fig. 3: Iterative PGD reliability (itPGDrel) procedure based on the itPGDrel-DP step defined in Fig. 2a–c, the itPGDrel-ref step detailed in Fig. 4 and the itPGDrel-PF step shown in Fig. 2

Iterative PGD reliability procedure (itPGDrel)

As discussed in the preceding sections, the sampling algorithm as well as the PGD model influence the accuracy of the presented PGD reliability method. The mesh discretization and the mode truncation influence the accuracy of the PGD model used to evaluate the limit state function. The number of samples as well as the chosen threshold for the adaptive design point estimation directly influence the accuracy of the estimate for the failure probability. For that reason, an iterative PGD reliability procedure (itPGDrel), summarized in Fig. 3, is proposed. It is based on a goal-oriented refinement of the PGD model around the adaptively defined design point describing the important region. In this way, the accuracy of the PGD model is verified at least at an approximated design point. The difficulty is that the design point is not known a-priori, so that the iterative PGD model refinement must be embedded into the adaptive importance sampling procedure.

Fig. 4: PGD refinement (itPGDrel-ref): iterative refinement of the PGD solution (discretization and mode truncation) at the current \(\varvec{\mu }_{h_{it}}\)

The itPGDrel process starts with a coarse discretization for each coordinate \(\kappa _i\) of the initial PGD model. Furthermore, the adaptive parameters \(N_p\) and \(p_0\) are chosen. In the numerical example section, a general discussion on how to choose these parameters is included. With these initial parameters and discretization, an initial (\(it=0\)) design point approximation is calculated using the adaptive PGD design point estimation presented above, defined in Fig. 2a–c and abbreviated as itPGDrel-DP in the following. After several runs of itPGDrel-DP, an averaged approximation of the design point \(\varvec{\mu }_{h_{0}}\) is used to improve the PGD model in the PGD refinement step called itPGDrel-ref. The scatter between runs is due to the different random seeds of the sampling procedure. In order to obtain an accurate estimate of the failure probability, the quality of the PGD model close to the design point is most important. For that reason, the criterion of the refinement step itPGDrel-ref is based on the limit state function value at \(\varvec{\mu }_{h_{it}}\): \(g(\varvec{\mu }_{h_{it}})\). The itPGDrel-ref process is summarized in Fig. 4 and will be explained in detail later. With the refined PGD model, a new approximation (\(it=1\)) of the design point \(\varvec{\mu }_{h_{1}}\) is computed by repeating the adaptive PGD design point estimation itPGDrel-DP, now starting from \(\varvec{\mu }_{h_{0}}\). Since the intervals of the PGD coordinates of the refined PGD model are defined around the design point approximation of iteration \(it-1\) (\(\varvec{\mu }_{h_0}\)), the mean of the random parameters might not be included in general. Therefore, the adaptive sampling in itPGDrel-DP starts with an importance sampling density centered at \(\varvec{\mu }_{h_{it-1}}\) instead of the mean of the random parameters as shown in Fig. 2a. The new design point approximation \(\varvec{\mu }_{h_{it}}\) is compared to the previous one \(\varvec{\mu }_{h_{it-1}}\). If the design point approximation has not changed significantly, i.e. the difference in the distance \(\beta \) of the design point in the standard normal space \(\varvec{\mu }^Y_{h_{\cdot }}\) from the origin is smaller than a given tolerance (\(\Delta \beta < tol_{\beta }\)), the failure probability is computed in the step itPGDrel-PF using the PGD reliability method according to Fig. 2, starting from the current design point \(\varvec{\mu }_{h_{it}}\). Otherwise, a new itPGDrel-ref step is performed, followed by a new itPGDrel-DP step. In the final PGD reliability step itPGDrel-PF, a convergence study for the required number of samples N in the importance sampling part (see Fig. 2d) is needed to estimate the failure probability with the given accuracy \(tol_p\). Therefore, in the importance sampling part of itPGDrel-PF (Fig. 2d), the number of samples N is successively increased until the estimator of the variance of \(P_f\) is smaller than \(tol_p\). A control-flow sketch of this outer loop is given below.
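All callables in the following sketch are hypothetical stand-ins for the steps defined above (initial PGD setup, itPGDrel-ref, itPGDrel-DP and itPGDrel-PF); only the control flow of Fig. 3 is shown.

```python
import numpy as np

def itPGDrel(build_pgd, refine_pgd, adaptive_dp, pf_step, to_std_normal,
             tol_beta=0.1, max_it=10):
    """Control flow of the itPGDrel procedure (Fig. 3); to_std_normal maps
    a design point into the standard normal space for the beta check."""
    pgd = build_pgd()                           # coarse initial PGD model
    mu = adaptive_dp(pgd)                       # itPGDrel-DP, it = 0
    beta = np.linalg.norm(to_std_normal(mu))
    for it in range(1, max_it + 1):
        pgd = refine_pgd(pgd, mu)               # itPGDrel-ref around mu
        mu = adaptive_dp(pgd, start=mu)         # itPGDrel-DP, it >= 1
        beta_new = np.linalg.norm(to_std_normal(mu))
        if abs(beta_new - beta) < tol_beta:     # design point converged
            break
        beta = beta_new
    return pf_step(pgd, mu)                     # itPGDrel-PF (Fig. 2d)
```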

The PGD refinement process itPGDrel-ref within the itPGDrel procedure is subdivided into three steps. In the first step, the mesh in space \(\varvec{x}\) is considered. A converged \(\varvec{x}\) mesh is derived by running an adaptive h-refinement on the standard FE model with the random parameters set to the current design point \(\varvec{\mu }_{h_{it}}\). In this case, standard refinement solvers can be used. In the present paper, an automated goal-oriented error control algorithm based on the framework of an auxiliary linearized adjoint (dual) problem provided in [72] is used. The limit state function acts as the goal function of the refinement algorithm. In the second step, a PGD model with the converged \(\varvec{x}\) mesh and coarse meshes in the directions of the random parameters is set up. The meshes of the random parameters (the PGD extra coordinates) are then refined successively until the change in the limit state function value \(g(\varvec{\mu }_{h_{it}})\) is smaller than a given tolerance. Here, a uniform refinement is used. After the first run (all coordinates refined successively once), the successive refinement starts again using the meshes of the first run as start meshes. In this way, an iterative, successive uniform refinement is obtained. In each refinement step, the PGD solution is generated from scratch. As an alternative, it would be possible to use a projection of the coarse PGD modes onto the fine mesh as the initial guess for the next refinement step. Once converged meshes in all directions are found, the required number of modes M needs to be determined in the last step. Therefore, the relative error of \(g(\varvec{\mu }_{h_{it}})\)

$$\begin{aligned} \epsilon = \frac{|g_{PGD}(\varvec{\mu }_{h_{it}}) - g_{FEM}(\varvec{\mu }_{h_{it}})|}{|g_{FEM}(\varvec{\mu }_{h_{it}})|} \end{aligned}$$
(16)

is used as selection criterion. In the latter expression, \(g_{FEM}\) denotes the limit state function based on the FE model with the current discretization and \(g_{PGD}\) the one based on the PGD model with the current discretization and a certain number of modes. The number of modes M in the PGD model is successively increased until the relative error is smaller than a given target value, as sketched below.
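A minimal sketch of this last selection step, with `g_pgd` and `g_fem` as hypothetical callables evaluating the limit state function with an M-term PGD sum and with the reference FE model, respectively:

```python
def select_num_modes(g_pgd, g_fem, mu_h, M_max, tol=1e-3):
    """Increase M until the relative error of Eq. 16 drops below tol."""
    ref = g_fem(mu_h)
    for M in range(1, M_max + 1):
        eps = abs(g_pgd(mu_h, M) - ref) / abs(ref)   # Eq. 16
        if eps < tol:
            return M
    return M_max
```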

Numerical results and discussion

In the following examples, the presented methods are discussed based on a one- and a two-dimensional structural reliability problem. First, the influence of the adaptive sampling parameters (\(p_0\) and \(N_p\)) is investigated for a simple uniaxial truss using the PGD reliability algorithm (Fig. 2) with an analytic limit state function. Furthermore, the proposed itPGDrel analysis is demonstrated and the influence of the selection criterion Eq. 16 of the itPGDrel-ref step on the failure probability is investigated. Second, a more complex two-dimensional cross-section of a roadway bridge is used to apply the itPGDrel analysis to a more realistic structural problem. The convergence of the failure probability, influenced by the model error of the PGD model with respect to an FE model and by the number of samples, is shown. Furthermore, the savings in computational time using the proposed method are discussed. All results (estimator as well as its variance) of stochastic computations (itPGDrel-DP and itPGDrel-PF) are computed from 50 runs.

PGD reliability analysis of a linear elastic uniaxial truss

At first, the proposed PGD reliability analysis is applied to a simple structural problem. The uniaxial truss in Fig. 5 is clamped on both sides and loaded with a line load \(n(x,\lambda ,\Phi )=\lambda \cdot [\cos (x+\Phi ) + \sinh (x+\Phi )]\) along \(x \in [0,L]\). This artificial load function is chosen to generate a problem which needs only a small number of modes in the PGD solution. Linear elastic material is assumed, \(\sigma = E \cdot \varepsilon \), with the Young's modulus E. The cross-section and the length of the truss are given as deterministic values \(A=1.0\) and \(L=1.0\). The load parameters \(\lambda \) and \(\Phi \) as well as the Young's modulus E are treated as random parameters with the normal distributions given in Table 1. The limit state is defined by the restriction of the maximum displacement located at \(x_{umax}\):

$$\begin{aligned} g(\Phi ,\lambda ,E) = u_{limit} - \left| u(x_{umax},\Phi ,\lambda ,E)\right| . \end{aligned}$$
(17)

\(u_{limit}\) is chosen as 0.33, which results in a failure probability in the order of \(10^{-6}\). In this example, \(x_{umax}=0.52\, L\) is chosen.

In total, three random parameters are considered so that the PGD approximation of the displacement function is given by

$$\begin{aligned} u(x,\Phi ,\lambda ,E) \approx \sum \limits _{i=1}^M \;F_1^i(x)\,F_2^i(\Phi )\,F_3^i(\lambda )\,F_4^i(E). \end{aligned}$$
(18)

The input functions (load and material law) can be separated analytically in an affine way. The load function requires four function products

$$\begin{aligned} n(x,\Phi ,\lambda ,E)= & {} \cos (x) \cdot \cos (\Phi ) \cdot \lambda \cdot 1.0 + (-\sin (x)) \cdot \sin (\Phi ) \cdot \lambda \cdot 1.0 \nonumber \\&+ \sinh (x) \cdot \cosh (\Phi ) \cdot \lambda \cdot 1.0 + \cosh (x) \cdot \sinh (\Phi ) \cdot \lambda \cdot 1.0 \nonumber \\= & {} \sum \limits _{i=1}^4 \;V_1^i(x)\,V_2^i(\Phi )\,V_3^i(\lambda )\,V_4^i(E). \end{aligned}$$
(19)

The material law is given in separated form as

$$\begin{aligned} \sigma (x,\Phi ,\lambda ,E) = \sum \limits _{i=1}^M \; \nabla F_1^i(x) \cdot F_2^i(\Phi )\cdot F_3^i(\lambda ) \cdot E \ F_4^i(E). \end{aligned}$$
(20)

In this first example, the displacement field can be computed analytically. Using this analytical expression for u, the limit state function Eq. 17 is given as

$$\begin{aligned} g(\Phi ,\lambda ,E)= & {} u_{limit} - |u(x_{umax},\Phi ,\lambda ,E)| \nonumber \\ u(x_{umax},\Phi ,\lambda ,E)= & {} \frac{\lambda }{E\ A} \; \left[ \cos (x_{umax}+\Phi ) - \sinh (x_{umax} +\Phi ) \right. \nonumber \\&+ \frac{x_{umax}}{L} \cdot \left( -\cos (L+\Phi ) + \cos (\Phi ) + \sinh (L+\Phi ) - \sinh (\Phi ) \right) \nonumber \\&- \cos (\Phi ) + \sinh (\Phi )]. \end{aligned}$$
(21)

The latter limit state function based on the analytical solution is used as reference for the PGD model in the following investigations.
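For reference, Eqs. 17 and 21 translate directly into the following functions; only the deterministic values \(A=L=1\), \(u_{limit}=0.33\) and \(x_{umax}=0.52\) from the text enter.

```python
import numpy as np

def u_truss(x, phi, lam, E, A=1.0, L=1.0):
    """Analytic displacement of the clamped 1D truss (Eq. 21)."""
    return lam / (E * A) * (
        np.cos(x + phi) - np.sinh(x + phi)
        + x / L * (-np.cos(L + phi) + np.cos(phi)
                   + np.sinh(L + phi) - np.sinh(phi))
        - np.cos(phi) + np.sinh(phi))

def g_truss(phi, lam, E, u_limit=0.33, x_umax=0.52):
    """Limit state of Eq. 17, evaluated at x_umax = 0.52 L with L = 1."""
    return u_limit - np.abs(u_truss(x_umax, phi, lam, E))
```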

Fig. 5: 1D truss example: geometry, boundary and loading conditions with \(n(x,\Phi ,\lambda ) = \lambda \cdot [\cos (x+\Phi ) + \sinh (x+\Phi )]\)

Table 1 Distributions of random parameters in example 1D truss
Table 2 1D truss: influence of the number of samples \(N_p\) with a fixed \(p_0 = 0.1\) in the adaptive design point estimation using the analytic limit state function (50 runs with \(N=5000\))

Influence of adaptive sampling parameters

In Tables 2 and 3, the influence of the adaptive sampling parameters \(p_0\) and \(N_p\) on the failure probability is summarized. The data are averages of 50 runs of the presented adaptive importance sampling method (summarized in Fig. 2) using the analytic limit state function instead of a PGD surrogate, with \(N=5000\) and varying adaptive parameters \(p_0\) and \(N_p\). For this discussion, the number of samples N in the last importance sampling step (Fig. 2d) is of minor importance but must be fixed for the estimation and comparison of \(P_f\). With \(N=5000\) samples, \(P_f\) can be estimated very accurately with a coefficient of variation (CoV) smaller than 0.1. For comparison purposes, the failure probability is given relative to a reference value \(P_{f_{ref}}\). That value is the average of 50 estimations with \(N_p=1000\), \(p_0=0.1\) and \(N=25,600\).

Table 3 1D truss: influence of the adaptive parameters \(N_p\) and \(p_0\) using the same relation (\(N_p \cdot p_0 = 10\)) in the adaptive design point estimation using the analytic limit state function (50 runs with \(N=5000\))

In the first comparison (Table 2), the threshold \(p_0\) is fixed to 0.1 and the number of samples per step \(N_p\) is increased. This means that the accuracy of the adaptive steps is improved. The total number of samples (\(N_p\) per step plus fixed N) increases with increasing \(N_p\). However, the effect on the accuracy (relative \(P_f\) and \(std(P_f)\)) is not significant as long as \(N_p\) is sufficient to approximate probabilities in the range of \(p_0\) (here \(N_p>10\)). If the number of runs is increased, an unbiased estimation should be reached; in that case, the average of the failure probability will not change anymore. In Table 3, the product \(N_p \cdot p_0\) is fixed to 10. In that case, exactly 10 samples fall into the corresponding failure domain. It is demonstrated that the adaptive sampling parameters \(p_0\) and \(N_p\) do not significantly influence the accuracy of the failure probability. Furthermore, the averages over the 50 runs of the variance estimator values (column “\(\overline{\sqrt{Var(P_f)}}\)” of Tables 2 and 3) are in the same range as the standard deviation of the failure probabilities computed with 50 runs (column “\(std(P_f)\)”). The study thus verifies the variance estimator Eq. 11 for the adaptive importance sampling algorithm.

Furthermore, a crude Monte Carlo simulation of the example with \(10^7\) samples yields an average failure probability of \(1.198 \cdot 10^{-6}\) (50 runs) with a coefficient of variation (CoV) of 0.33. Compared to the results in the tables, the presented adaptive importance sampling method provides the same accuracy with significantly fewer sample evaluations.
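Such a crude Monte Carlo check can be written in a few lines on top of the `g_truss` function sketched above; the means and standard deviations of Table 1 are not reproduced in this excerpt and must be supplied by the caller.

```python
import numpy as np

def crude_mc_pf(dists, n=10**7, seed=0):
    """Crude Monte Carlo estimate of P_f: fraction of samples with g <= 0.

    `dists` holds the (mean, std) pairs of (Phi, lambda, E) from Table 1.
    """
    rng = np.random.default_rng(seed)
    phi, lam, E = (rng.normal(m, s, n) for m, s in dists)
    return np.mean(g_truss(phi, lam, E) <= 0)
```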

From this study, we propose to choose the adaptive sampling parameters such that \(N_p \cdot p_0 \ge 10\). Therefore, in the following examples, \(N_p=100\) and \(p_0=0.1\) are used without further discussion.

Iterative PGD reliability analysis

In the next step, the iterative PGD reliability analysis (itPGDrel) procedure is applied to the example. As starting conditions, \(N_p=100\), \(p_0=0.1\) and 2 linear elements in each direction are chosen. The PGD refinement step itPGDrel-ref for the first design point approximation \(\varvec{\mu }_{h_0}\), calculated from 50 runs of the adaptive PGD design point estimation itPGDrel-DP, is demonstrated in Figs. 6 and 7.

In Fig. 6, the adaptive h-refinement of the spatial mesh in x (step 1 of itPGDrel-ref) is shown. The relative error of the limit state function value at the design point approximation between the FE model and the analytic model is plotted against the number of elements in x. As expected, the relative error decreases with an increasing number of elements. From this convergence study, an x-mesh with 361 linear elements is chosen. Using this mesh, the relative error with respect to the analytic solution is about \(10^{-5}\).

Fig. 6: 1D truss: error of the FE solution (mesh in x) \(g_{FE}(\varvec{\mu }_{h_0})\) relative to the analytic value \(g_{ana}(\varvec{\mu }_{h_0})\) over the number of elements (itPGDrel-ref step 1)

Fig. 7: 1D truss: error of the PGD limit state function at the first design point approximation \(\varvec{\mu }_{h_0}\) over the number of PGD modes M and different PGD mesh discretizations (itPGDrel-ref steps 2 and 3). The number of linear elements in \(\Phi \), \(\lambda \), E is given in the legend (361 linear elements in x)

Fig. 8: 1D truss: first three PGD modes of the refined PGD solution at the first design point approximation \(\varvec{\mu }_{h_0}\) with \(\xi =10\)

In the next steps 2 and 3 of the PGD refinement itPGDrel-ref, the PGD model is refined. The interval ranges of the PGD extra coordinates \(\Phi \), \(\lambda \) and E are defined by the first design point approximation \(\varvec{\mu }_{h_{0}}\) and the standard deviations \(s_i\) of the random parameters, and are chosen as \(\varvec{\mu }_{h_0} \pm 10 \cdot \varvec{s}\). In Fig. 7, the relative error Eq. 16 of the limit state function value \(g(\varvec{\mu }_{h_0})\) between a PGD-based and an FE-based limit state function is plotted against the number of PGD modes for different mesh sizes for \(\Phi \), \(\lambda \) and E. Linear elements are used for the extra coordinates. The refined meshes are determined by iteratively refining the PGD coordinates as explained in Fig. 4. As expected, the error decreases with increasing number of modes and increasing number of elements. Using more than three PGD modes, the change in the error is negligible. The second parameter \(\lambda \) is always discretized with 2 linear elements (the start value), because the displacement depends linearly on the load factor. From this first itPGDrel-ref pass, a sufficiently refined PGD model has to be chosen for the following adaptive PGD design point estimation step itPGDrel-DP with \(it=1\). This is done based on the selection criterion Eq. 16, the relative error of the limit state function value at \(\varvec{\mu }_{h_0}\) computed with the PGD surrogate compared to the one computed with an FE model, shown in Fig. 7. Here, a refined PGD model with a relative error of \(10^{-4}\) is used, which consists of 1024 linear elements in \(\Phi \), 2 in \(\lambda \), 128 in E and three PGD modes. This discretization corresponds to a refinement tolerance of \(10^{-4}\). The first three modes of each PGD coordinate are plotted in Fig. 8.

In the next iteration step (\(it=1\)), the refined PGD model is used in the PGD design point estimation step itPGDrel-DP and a new design point approximation \(\varvec{\mu }_{h_{1}}\) is computed as the average of 50 runs. These two steps (itPGDrel-ref and itPGDrel-DP) are repeated until the new approximation of the design point is similar to the previous one: \(\Delta \beta < 0.1\). Afterwards, the failure probability can be estimated by performing the PGD reliability step itPGDrel-PF.

Table 4 1D truss: summary of the itPGDrel process, given by the averaged design point \(\varvec{\mu }_{h_{it}}\) in the normal space, the difference in the distance \(\Delta \beta \) as well as the failure probability with \(N=1600\) based on the PGD and the FE model (50 runs, \(p_0=0.1\), \(N_p=100\), \(\xi =10\))
Fig. 9: 1D truss: convergence of the failure probability (left) and its standard deviation (right) over the number of samples N running itPGDrel-PF for \(it=1\) (50 runs, \(p_0=0.1\), \(N_p=100\), \(N_{max}=25,600\) and \(\xi =10\))

Fig. 10: 1D truss: influence of the refinement degree of the PGD model (given by Eq. 16) on the failure probability \(P_f\), based on 50 runs of itPGDrel-PF with varying PGD solutions regarding mesh discretization and number of modes (\(\varvec{\mu }_{h_0}\) from the initial iteration \(it=0\); \(p_0=0.1\), \(N_p=100\) and \(N=1600\) are used)

In Table 4, the iterative itPGDrel process is summarized. The coordinates of the design point approximation transformed into the standard normal space as well as the resulting difference in the distance \(\beta \) (see Fig. 3) are given for each iteration step it. Additionally, the failure probability estimated with \(N=1600\) samples by means of the current PGD model and \(\varvec{\mu }_{h_{it}}\), as well as by means of the analytic limit state function, is given for each iteration step for comparison purposes; usually, the failure probability is only calculated at the end for the converged iteration step (see Fig. 9). The CoV of these values with \(N=1600\) is 0.07. Already after two iterations, the difference in \(\beta \) is sufficiently small (using \(tol_{\beta }=0.1\)). The reason for the proposed refinement step can be seen by comparing the failure probabilities. The failure probability computed using the coarse initial PGD model shows a large deviation from the reference value computed with the analytic limit state function. After the first PGD refinement step around the first design point approximation, the computed failure probability is very close to the reference value (\(it = 1\)). A further refinement does not significantly improve the accuracy of the estimate for \(P_f\) (\(it=2\)).

In Fig. 9, the influence of the number of samples N in the PGD importance sampling of the PGD reliability step itPGDrel-PF is demonstrated. The failure probability and its standard deviation are plotted against the number of samples N, both for the converged iteration step (\(it=1\)) and for the case using the analytic limit state function. The estimate of \(P_f\) converges, as expected from Eq. 11, proportionally to \(1/\sqrt{N}\), which shows as a linear decrease of the standard deviation on a logarithmic scale.
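
The \(1/\sqrt{N}\) rate is the generic Monte Carlo behaviour of Eq. 11. A self-contained toy check in Python (a plain Monte Carlo estimate of \(P(U>\beta )\) for a standard normal U, not the adaptive sampler used here):

```python
import numpy as np

rng = np.random.default_rng(0)
beta = 3.0                                 # toy linear limit state g = beta - u
for N in (400, 1600, 6400, 25600):
    pf = [np.mean(rng.standard_normal(N) > beta) for _ in range(50)]
    # the std of the estimate shrinks by about 1/2 when N grows by 4
    print(N, np.mean(pf), np.std(pf))
```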

In summary, the presented iterative PGD reliability procedure itPGDrel can control the error contributions of the PGD model. Based on convergence studies, an accurate estimate of the failure probability can be obtained.

Influence of the PGD model on failure probability

After verifying the itPGDrel method, the sensitivity of the failure probability with respect to the degree of refinement of the PGD surrogate (performed in the itPGDrel-ref step) is discussed. The goal is to refine the PGD model only as much as required for an accurate estimate of the failure probability. For that purpose, the influence of the selection criterion of Eq. 16 (which characterizes the accuracy of the PGD approximation) on the failure probability computed in the PGD reliability step itPGDrel-PF is investigated.

For the first iteration \(it=0\), the failure probability is computed (using itPGDrel-PF) for each intermediate PGD model of the refinement step itPGDrel-ref, thus illustrating the influence of both the discretization error and the mode truncation error on \(P_f\).

In Fig. 10, these failure probabilities are plotted against the selection criterion of Eq. 16 (the relative error of \(g(\varvec{\mu }_{h_{0}})\) between the PGD and the FE model), separately for the PGD models obtained during itPGDrel-ref step 2 (discretization: different uniform refinements of the PGD extra coordinates) and step 3 (truncation: varying the number of PGD modes). In this way, the effect of the PGD model accuracy (measured at a single point) on the failure probability is shown. The failure probability computed with the analytic limit state function is given as reference. For a selection criterion of \(10^{-3}\) or better, the reference failure probability is recovered. Furthermore, the source of the model error (i.e. discretization versus mode truncation) is not important.

Summarizing the first numerical example, general rules for choosing the adaptive sampling parameters \(N_p\) and \(p_0\) are given. The itPGDrel procedure is verified, and the importance of including the model refinement step itPGDrel-ref in an adaptive way is demonstrated. The iterative method converges very fast; in this example, one itPGDrel-ref step is sufficient to find a PGD surrogate that estimates the failure probability efficiently and with sufficient accuracy. Furthermore, it is shown that a suitably refined PGD model can be selected by a criterion measuring the relative error of the limit state function computed with the PGD model against a FE model at the design point. Refined PGD models with a selection criterion below \(10^{-3}\) yield failure probabilities within the same accuracy as the analytic limit state function. This demonstrates that the required accuracy of the PGD surrogate is directly related to the desired accuracy of the estimated failure probability.

Fig. 11 T-beam cross-section with geometry in (m), loading and boundary conditions

PGD reliability analysis of a roadway cross-section

As a second structural example, a 2D roadway cross-section of a T-beam bridge is investigated. The geometry, boundary and loading conditions are shown in Fig. 11. The traffic is modelled by constant loads over each lane of the three-lane roadway. The structure is modelled as linear elastic under a plane strain assumption. The mean value of the Young's modulus is defined as \(E=\lambda _E \cdot E_0\). The remaining deterministic and random parameters are given in Table 5. The limit state is defined by restricting the deflection \(u_y\) at the right end of the cross-section (point P at \((6.45,-0.1)\) (m))

$$\begin{aligned} g(\lambda ,\lambda _E,\nu ) = u_{limit} - |u_y(\varvec{x}_P,\lambda ,\lambda _E,\nu )|. \end{aligned}$$
(22)

\(u_{limit}\) is chosen as 80 mm, which yields a failure probability around \(10^{-5}\).
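
In code, the limit state of Eq. 22 is a thin wrapper around the displacement evaluation. A sketch, where u_y is a hypothetical callable returning the vertical displacement at point P from either the PGD surrogate or the FE model:

```python
U_LIMIT = 0.08  # m, deflection limit of 80 mm

def g(lam, lam_E, nu, u_y):
    """Limit state of Eq. 22: failure when the deflection at
    P = (6.45, -0.1) m exceeds u_limit."""
    return U_LIMIT - abs(u_y(6.45, -0.1, lam, lam_E, nu))
```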

Three random parameters are considered; thus, each term of the PGD approximation of the displacement field is given by the product of four PGD modes:

$$\begin{aligned} \varvec{u}(\varvec{x},\lambda ,\lambda _E,\nu ) \approx \sum \limits _{i=1}^M \;\varvec{F}_1^i(\varvec{x})\,F_2^i(\lambda )\,F_3^i(\lambda _E)\,F_4^i(\nu ). \end{aligned}$$
(23)

Each term of the approximation consists of a two-dimensional function depending on the spatial coordinates and three one-dimensional functions depending on the random variables. The corresponding separated representations of the external load as well as of the material law can be derived analytically. The separated functions for the loads read

$$\begin{aligned}&\varvec{q_1}(\varvec{x},\lambda ,\lambda _E,\nu ) = \varvec{G_1}(\varvec{x})\cdot \lambda \cdot 1.0 \cdot 1.0 = \varvec{G_1}(\varvec{x})\, G_2(\lambda )\, G_3(\lambda _E)\,G_4(\nu ) \nonumber \\&\varvec{q_2}(\varvec{x},\lambda ,\lambda _E,\nu ) = \varvec{H_1}(\varvec{x}) \cdot \lambda \cdot 1.0 \cdot 1.0 = \varvec{H_1}(\varvec{x})\,H_2(\lambda )\, H_3(\lambda _E) \, H_4(\nu ). \end{aligned}$$
(24)

\(\varvec{G_1}(\varvec{x})\) and \(\varvec{H_1}(\varvec{x})\) are given as piecewise constant functions (\(q_1\) and \(q_2\) act in the y-direction, see Fig. 11). The linear elasticity matrix for plane strain in two dimensions can be decomposed into the following parts in Voigt notation

$$\begin{aligned} \varvec{C}&= \frac{1}{2\,(1-\nu )} \cdot \lambda _E \, E_0 \cdot 1.0 \, \left[ \begin{matrix} 1 &{} 1 \\ 1 &{} 1 \end{matrix} \right] + \frac{1}{2\,(1+\nu )} \cdot \lambda _E \, E_0 \cdot 1.0 \, \left[ \begin{matrix} 1 &{} -1 \\ -1 &{} 1 \end{matrix}\right] \nonumber \\&= \sum \limits _{i=1}^2 \varvec{C}^i_1(\varvec{x}) \, C^i_2(\lambda ) \, C^i_3(\lambda _E) \, C^i_4(\nu ), \end{aligned}$$
(25)

depending on the chosen PGD coordinates \(\varvec{x}\), \(\lambda \), \(\lambda _E\) and \(\nu \).
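
Once the modes have been computed, evaluating the surrogate of Eq. 23 at a sample reduces to three 1D interpolations and a sum of products. A minimal sketch, assuming the modes are stored as nodal values on the 1D extra-coordinate meshes (the storage layout and names are illustrative):

```python
import numpy as np

def eval_pgd(F1_P, F2, F3, F4, grids, lam, lam_E, nu):
    """Evaluate u_y(x_P, lam, lam_E, nu) from Eq. 23.
    F1_P:  (M,) spatial modes pre-evaluated at point P;
    F2-F4: (M, n_nodes) nodal values of the 1D extra-coordinate modes;
    grids: the three 1D meshes for lam, lam_E and nu."""
    u = 0.0
    for i in range(len(F1_P)):               # loop over the M modes
        u += (F1_P[i]
              * np.interp(lam,   grids[0], F2[i])
              * np.interp(lam_E, grids[1], F3[i])
              * np.interp(nu,    grids[2], F4[i]))
    return u
```

No equation system is solved in this evaluation, which is the source of the per-sample savings discussed in the computational time section below.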

Table 5 Definition of deterministic and random parameters of the 2D cross-section example

Iterative PGD reliability analysis

The iterative PGD reliability analysis itPGDrel is applied to the bridge cross-section example. The results are summarized in Table 6 and Fig. 12. As starting conditions, \(N_p=100\), \(p_0=0.1\), 404 quadratic elements in the spatial dimension \(\varvec{x}\) and 2 linear elements in each of the directions \(\lambda \), \(\lambda _E\) and \(\nu \) are chosen. The interval ranges for the PGD extra coordinates are chosen with \(\xi =10\) around the mean of the random parameters. The refinement after the initial PGD design point estimation step itPGDrel-DP (\(it=0 \rightarrow \varvec{\mu }_{h_0}\)) is done with a tolerance of \(10^{-4}\) for both refinement steps (itPGDrel-ref steps 1 and 2). The goal function of the adaptive h-refinement in step 1 of itPGDrel-ref, used to achieve a converged mesh in \(\varvec{x}\), is defined by the limit state function. Due to the implementation of the refinement algorithm in FEniCS [72], the integral of the displacement \(u_y\) over the edge containing point P (see Fig. 11), divided by the length of the edge, is used. In this example, this average edge displacement is very close to the displacement at point P.
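
Such a goal functional can be written directly in (legacy) FEniCS, whose adaptive solver accepts a goal form M. A condensed, illustrative sketch for the elasticity problem; mesh, the facet markers edges (edge at P marked 1, loaded lane facets marked 2), the Lamé constants lmbda and mu_s, the load q and the boundary conditions bc are assumptions set up beforehand, not the paper's actual code:

```python
from dolfin import *

V = VectorFunctionSpace(mesh, "CG", 2)
u, v = TrialFunction(V), TestFunction(V)

def sigma(w):                        # plane strain linear elasticity
    eps = sym(grad(w))
    return lmbda*tr(eps)*Identity(2) + 2*mu_s*eps

ds_m = Measure("ds", domain=mesh, subdomain_data=edges)
a = inner(sigma(u), sym(grad(v)))*dx
L = dot(q, v)*ds_m(2)                # traffic load on the marked facets

w = Function(V)
edge_len = assemble(Constant(1.0)*ds_m(1))
M = w[1]/Constant(edge_len)*ds_m(1)  # goal: average u_y over the edge at P

solve(a == L, w, bc, tol=1e-4, M=M)  # goal-oriented adaptive solve
```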

In Table 6, the changes of the design point approximation transformed into the normal space, the differences in the distance \(\beta \), and the failure probability (with \(N=1600\)) over the iteration steps are shown. As reference, \(P_f\) computed with \(N=1600\) based on a FE limit state function is given. The algorithm again converges in only two steps (with \(tol_{\beta }=0.1\)), and the resulting \(P_f\) obtained with the PGD model is very accurate compared to the full FE model.

In Fig. 12, the convergence over the number of samples N in the itPGDrel-PF step is illustrated. The plots show the failure probability and its standard deviation over the number of samples N for the converged iteration step 1. With 1600 samples, the failure probability can be computed with a CoV of 0.05. Increasing the number of samples by a factor of four halves the CoV, as expected. The refined PGD model of iteration step 1 is discretized with 12,122 quadratic elements in \(\varvec{x}\), and with 2 linear elements in \(\lambda \), 1024 in \(\lambda _E\) and 128 in \(\nu \). Three PGD modes are used for each coordinate. The selection criterion of Eq. 16 for this refined PGD model, i.e. the relative error of the PGD solution compared to the FE model at the design point, is \(2\cdot 10^{-5}\) and thus already very small.

Finally, the influence of the refinement degree of the PGD model chosen in the itPGDrel-ref step on the failure probability computed in the PGD reliability step itPGDrel-PF is investigated. Again, the intermediate PGD models obtained during the refinement step itPGDrel-ref at \(it=0\) are used to estimate the failure probability. The failure probability over the value of the selection criterion of Eq. 16 for each of these PGD models is given in Fig. 13. The failure probability computed with a PGD surrogate whose relative error at the design point approximation is below \(10^{-3}\) fits the reference values generated with a FE model very well. As already seen in the first, simple example, a perfect fit between the PGD surrogate and its reference model is not required to compute an accurate estimate of the failure probability.

Fig. 12 Cross-section: convergence of the failure probability (left) and its standard deviation (right) over the number of samples N running itPGDrel-PF for \(it=1\) (50 runs, \(p_0=0.1\), \(N_p=100\), \(N_{max}=102,400\) and \(\xi =10\))

Fig. 13 Cross-section: influence of the refinement degree of the PGD model (given by Eq. 16) on the failure probability \(P_f\), based on 50 runs of itPGDrel-PF with varying PGD solutions regarding mesh discretization and number of modes (\(\varvec{\mu }_{h_0}\) from the initial iteration \(it=0\); \(p_0=0.1\), \(N_p=100\), \(N=1600\) are used)

Table 6 Cross-section: summary of the itPGDrel process, given by the averaged design point \(\varvec{\mu }_{h_{it}}\) in the normal space and the failure probability ranges with \(N=1600\) based on the PGD and FE models (50 runs, \(p_0=0.1\), \(N_p=100\) and \(\xi =10\))

Computational time

The main advantage of using a PGD surrogate in a reliability analysis is the saving of computational cost at each model evaluation for a sample of the random variables. In the standard case, a full FE model must be solved for each sample to evaluate the limit state function; in the cross-section example, this means the solution of a 2D structural problem with varying parameters. Using a PGD model, only the computation of the PGD solution for a given parameter space is computationally expensive. In the fixed point iteration used to compute this solution, several 2D simulations, each with a computational cost comparable to a sample evaluation in the standard FE based analysis, are needed to generate the M spatial PGD modes \(\varvec{F}_1^i(\varvec{x})\). Afterwards, the model evaluation at each sample reduces to evaluating the PGD modes at specific positions: a pure function evaluation without solving any equation system. The key point is that, for the presented PGD coupled method, the evaluation at each sample used to estimate \(P_f\) is computationally very cheap and has only a minor influence on the total computational effort.
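
The per-sample cost difference can be made explicit in code. A schematic comparison in Python, reusing the hypothetical eval_pgd sketch from above; K_assemble, dof_P, U_LIMIT and the mode arrays are likewise placeholders, not the actual implementation:

```python
import scipy.sparse.linalg as spla

def g_fe(theta):
    K, f = K_assemble(theta)       # FE assembly for this sample
    u = spla.spsolve(K, f)         # one 2D system solve per sample
    return U_LIMIT - abs(u[dof_P])

def g_pgd(theta):
    lam, lam_E, nu = theta         # pure function evaluation, no solve
    return U_LIMIT - abs(eval_pgd(F1_P, F2, F3, F4, grids, lam, lam_E, nu))
```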

For that reason, the number of required solutions of a 2D equation system is counted for each part of the proposed itPGDrel procedure in Table 7, either using a PGD surrogate or a FE model. The 2D systems do not all have the same size because of the iterative nature of itPGDrel, which includes a discretization refinement. Additionally, the total number of samples, including all adaptive samples \(N_p\) as well as the samples N of the last importance sampling step, is given. In the case of itPGDrel with a FE model, the number of 2D system solutions is equal to the total number of samples, because at each sample point a new FE simulation is required. Furthermore, a small number of simulations is needed for the adaptive h-refinement of the mesh in \(\varvec{x}\). In contrast, using a PGD model requires far fewer 2D system solutions than the total number of samples. In this example, the computation of the initial PGD model requires 25 solutions of the 2D system to generate the M spatial modes \(\varvec{F}_1^i(\varvec{x})\). After that, the 500 samples of the first itPGDrel-DP step can be computed without solving any equation system, based on the efficient evaluation of the PGD solution. In the subsequent itPGDrel-DP steps, as well as in the final itPGDrel-PF step, no system of equations must be solved either, because the current PGD model can be reused. The computational cost of these steps is therefore almost negligible, and performing several runs to compute averages and standard deviations can be done very efficiently. However, the PGD refinement procedure itPGDrel-ref requires significantly more 2D system solutions than the approach based on a FE model, because in the PGD case three additional meshes must be successively refined. Hence, several PGD problems must be computed, each involving around 25 solutions of the 2D system to generate the M spatial modes \(\varvec{F}_1^i(\varvec{x})\). Of course, this depends on the initial meshes of the PGD extra coordinates as well as on the refinement factor; starting with a fine discretization of these extra coordinates decreases the required number of 2D solutions significantly.

In the current implementation, using the PGD surrogate is around 100 times faster than using the FE model. Instead of around 20 min, the final itPGDrel-PF step with fixed \(N=1600\) samples takes only a few seconds with the PGD model. Of course, the speed-ups are problem dependent and increase with the complexity of the underlying structure. For example, in the first itPGDrel-DP step, where a very coarse mesh is used, the speed-up of the PGD surrogate over the FE model is only around 5.

Table 7 Cross-section: comparison of the number of required solutions of the 2D equation system to compute \(\varvec{u}\) (FE) or \(\varvec{F}_1\) (PGD) in the itPGDrel analysis (\(N_{total}\): total number of samples)

Conclusion

In the present paper, an iterative approach for an efficient reliability analysis of structures is introduced. The developed method paves the way for using full probabilistic approaches in industrial applications. The first idea is to combine adaptive importance sampling, which minimizes the required number of samples, with a PGD reduced model acting as a surrogate of the structure, which saves computational effort at each sample evaluation. The second idea is the integration of a goal-oriented model refinement into the reliability analysis to automatically control the error of the failure probability caused by model errors. To this end, the PGD model of the limit state function is iteratively refined around the adaptively approximated design point, based on its error with respect to a FE limit state function at this point.

The developed method was validated for a uniaxial truss structure as well as for a cross-section of a roadway bridge. In both cases, an accurate estimate of the failure probability could be obtained in a few iteration steps. The computed failure probability was verified against an estimate with an analytic limit state function in the first case and with a FE based limit state function in the second case. A speed-up of around 100 was reached in the second example compared to the estimation with a FE model. The importance of the PGD refinement step is shown by studying the influence of the differently refined PGD models on the failure probability. In both examples, a refined PGD model whose relative error of the PGD limit state function value at the approximated design point, compared to the FE limit state function value, is less than \(10^{-3}\) yields failure probabilities within the accuracy of the reference values. The validation of the approach for general nonlinear structural problems will be addressed in future work of the authors.

With the proposed itPGDrel analysis, it is possible to estimate the failure probability very efficiently compared to standard approaches. Thanks to its iterative nature, the error sources of the failure probability can be controlled and differentiated. The influence of model errors is considered by an iterative, goal-oriented refinement of the PGD surrogate in the region of interest.

Data Availability Statement

Not applicable.

Notes

  1. Eurocode—basis of structural design, European standard EN 1990:2002.

  2. Module diversipy. Sample in hypercube, select diverse subsets and measure diversity, version 0.8. Available at https://pypi.org/project/diversipy/.

  3. The method is implemented in Python ("Python Software Foundation. Python Language Reference, version 3.6. Available at http://www.python.org") using standard Python packages such as NumPy, SciPy, etc., and the open-source tool FEniCS ("FEniCS Project version 2018.1. Available at https://fenicsproject.org/") [73] as solver for the variational problems.

  4. The location of the maximum displacement value has been derived numerically at the first design point approximation in the itPGDrel procedure. It is here fixed at the beginning to derive a simple analytic expression for the limit state function.

References

  1. Bucher C. Asymptotic sampling for high-dimensional reliability analysis. Probabilistic Eng Mech. 2009;24(4):504–10. https://doi.org/10.1016/j.probengmech.2009.03.002.

  2. Rackwitz R. Reliability analysis: a review and some perspectives. Struct Saf. 2001;23(4):365–95. https://doi.org/10.1016/S0167-4730(02)00009-7.

  3. Bucher C, Most T. A comparison of approximate response functions in structural reliability analysis. Probabilistic Eng Mech. 2008;23(2–3):154–163. In: 5th international conference on computational stochastic mechanics. https://doi.org/10.1016/j.probengmech.2007.12.022.

  4. Engelund S, Rackwitz R. A benchmark study on importance sampling techniques in structural reliability. Struct Saf. 1993;12(4):255–76. https://doi.org/10.1016/0167-4730(93)90056-7.

  5. Dubourg V, Sudret B, Deheeger F. Metamodel-based importance sampling for structural reliability analysis. Probabilistic Eng Mech. 2013;33:47–57. https://doi.org/10.1016/j.probengmech.2013.02.002.

  6. Schöbi R, Sudret B. Structural reliability analysis for p-boxes using multi-level meta-models. Probabilistic Eng Mech. 2017;48:27–38. https://doi.org/10.1016/j.probengmech.2017.04.001.

  7. Moarefzadeh MR, Melchers RE. Directional importance sampling for ill-proportioned spaces. Struct Saf. 1999;21(1):1–22. https://doi.org/10.1016/S0167-4730(98)00023-X.

  8. Nie J, Ellingwood BR. Directional methods for structural reliability analysis. Struct Saf. 2000;22(3):233–49. https://doi.org/10.1016/S0167-4730(00)00014-X.

  9. Grooteman F. An adaptive directional importance sampling method for structural reliability. Probabilistic Eng Mech. 2011;26(2):134–41. https://doi.org/10.1016/j.probengmech.2010.11.002.

  10. Sichani MT, Nielsen SRK, Bucher C. Applications of asymptotic sampling on high dimensional structural dynamic problems. Struct Saf. 2011;33(4–5):305–16. https://doi.org/10.1016/j.strusafe.2011.05.002.

  11. Bucher C. Asymptotic sampling—a tool for efficient reliability computation in high dimensions. PAMM. 2015;15(1):549–50. https://doi.org/10.1002/pamm.201510265.

  12. Au S-K, Beck JL. Estimation of small failure probabilities in high dimensions by subset simulation. Probabilistic Eng Mech. 2001;16(4):263–77. https://doi.org/10.1016/S0266-8920(01)00019-4.

  13. Papaioannou I, Betz W, Zwirglmaier K, Straub D. MCMC algorithms for subset simulation. Probabilistic Eng Mech. 2015;41:89–103. https://doi.org/10.1016/j.probengmech.2015.06.006.

  14. Alvarez DA, Uribe F, Hurtado JE. Estimation of the lower and upper bounds on the probability of failure using subset simulation and random set theory. Mech Syst Signal Process. 2018;100:782–801. https://doi.org/10.1016/j.ymssp.2017.07.040.

  15. Melchers RE. Search-based importance sampling. Struct Saf. 1990;9(2):117–28. https://doi.org/10.1016/0167-4730(90)90003-8.

  16. Au SK, Beck JL. A new adaptive importance sampling scheme for reliability calculations. Struct Saf. 1999;21(2):135–58. https://doi.org/10.1016/S0167-4730(99)00014-4.

  17. Bucher CG. Adaptive sampling—an iterative fast Monte Carlo procedure. Struct Saf. 1988;5(2):119–26. https://doi.org/10.1016/0167-4730(88)90020-3.

  18. Au SK, Beck JL. Important sampling in high dimensions. Struct Saf. 2003;25(2):139–63. https://doi.org/10.1016/S0167-4730(02)00047-4.

  19. Katafygiotis L, Cheung SH, Yuen K-V. Spherical subset simulation (\(s^3\)) for solving non-linear dynamical reliability problems. Intern J Reliab Saf. 2010;4(2/3):122–38.

  20. Pendola M, Mohamed A, Lemaire M, Hornet P. Combination of finite element and reliability methods in nonlinear fracture mechanics. Reliab Eng Syst Saf. 2000;70(1):15–27. https://doi.org/10.1016/S0951-8320(00)00043-0.

  21. Gomes HM, Awruch AM. Comparison of response surface and neural network with other methods for structural reliability analysis. Struct Saf. 2004;26(1):49–67. https://doi.org/10.1016/S0167-4730(03)00022-5.

  22. Chojaczyk AA, Teixeira AP, Neves LC, Cardoso JB, Soares CG. Review and application of artificial neural networks models in reliability analysis of steel structures. Struct Saf. 2015;52(Part A):78–89. https://doi.org/10.1016/j.strusafe.2014.09.002.

  23. Yun W, Lu Z, Jiang X. An efficient reliability analysis method combining adaptive kriging and modified importance sampling for small failure probability. Struct Multidiscipl Optimiz. 2018;58(4):1383–93. https://doi.org/10.1007/s00158-018-1975-6.

  24. Novak L, Novak D. Polynomial chaos expansion for surrogate modelling: theory and software. Beton- und Stahlbetonbau. 2018;113(S2):27–32. https://doi.org/10.1002/best.201800048.

  25. Hurtado JE. An examination of methods for approximating implicit limit state functions from the viewpoint of statistical learning theory. Struct Saf. 2004;26(3):271–93. https://doi.org/10.1016/j.strusafe.2003.05.002.

  26. Patera AT, Rozza G. Reduced basis approximation and a posteriori error estimation for parametrized partial differential equations, Version 1.0. to appear in (tentative rubric) MIT Pappalarado graduate monographs in mechanical engineering, Copyright MIT; 2006. https://www.uni-ulm.de/fileadmin/website_uni_ulm/mawi.inst.070/ss12/NumPDE2/Literatur/Patera_Rozza_-_RB_Approximation_and_a_posteriori_error_estimation_for_PPDE.pdf.

  27. Grepl MA, Patera AT. A posteriori error bounds for reduced-basis approximations of parametrized parabolic partial differential equations. ESAIM Math Model Numer Anal. 2005;39:157–81. https://doi.org/10.1051/m2an:2005006.

  28. Huynh DBP, Patera AT. Reduced basis approximation and a posteriori error estimation for stress intensity factors. Int J Numer Methods Eng. 2007;72(10):1219–59. https://doi.org/10.1002/nme.2090.

  29. Chatterjee A. An introduction to the proper orthogonal decomposition. Curr Sci. 2000;78(7):808–17.

  30. Holmes P, Lumley JL, Berkooz G. Turbulence, coherent structures, dynamical systems and symmetry. Cambridge: Cambridge University Press; 1996. https://doi.org/10.1017/CBO9780511622700.

  31. Kerschen G, Golinval J-C, Vakakis A, Bergman L. The method of proper orthogonal decomposition for dynamical characterization and order reduction of mechanical systems: an overview. Nonlinear Dyn. 2005;41(1–3):147–69. https://doi.org/10.1007/s11071-005-2803-2.

  32. Chaturantabut S, Sorensen DC. Application of pod and deim on dimension reduction of non-linear miscible viscous fingering in porous media. Math Comput Model Dyn Syst. 2011;17(4):337–53. https://doi.org/10.1080/13873954.2011.547660.

  33. Grepl MA. Model order reduction of parametrized nonlinear reaction-diffusion systems. Comput Chem Eng. 2012;43:33–44. https://doi.org/10.1016/j.compchemeng.2012.03.013.

  34. Hernández JA, Oliver J, Huespe AE, Caicedo MA, Cante JC. High-performance model reduction techniques in computational multiscale homogenization. Comput Methods Appl Mech Eng. 2014;276:149–89. https://doi.org/10.1016/j.cma.2014.03.011.

  35. Radermacher A, Reese S. Pod-based model reduction with empirical interpolation applied to nonlinear elasticity. Int J Numer Methods Eng. 2016;107(6):477–95. https://doi.org/10.1002/nme.5177.

  36. Ryckelynck D, Benziane DM. Multi-level a priori hyper-reduction of mechanical models involving internal variables. Comput Methods Appl Mech Eng. 2010;199(17–20):1134–42. https://doi.org/10.1016/j.cma.2009.12.003.

  37. Carlberg K, Farhat C, Cortial J, Amsallem D. The gnat method for nonlinear model reduction: effective implementation and application to computational fluid dynamics and turbulent flows. J Comput Phys. 2013;242:623–47. https://doi.org/10.1016/j.jcp.2013.02.028.

  38. Mainini L, Willcox K. Surrogate modeling approach to support real-time structural assessment and decision making. AIAA J. 2015;53(6):1612–26. https://doi.org/10.2514/1.J053464.

  39. Gallimard L, Florentin E, Ryckelynck D. Towards error bounds of the failure probability of elastic structures using reduced basis models. Int J Numer Methods Eng. 2017;112(9):1216–34. https://doi.org/10.1002/nme.5554.

  40. Nouy A. A priori model reduction through proper generalized decomposition for solving time-dependent partial differential equations. Comput Methods Appl Mech Eng. 2010;199(23–24):1603–26. https://doi.org/10.1016/j.cma.2010.01.009.

  41. Chinesta F, Ammar A, Cueto E. Recent advances and new challenges in the use of the proper generalized decomposition for solving multidimensional models. Arch Comput Methods Eng. 2010;17(4):327–50. https://doi.org/10.1007/s11831-010-9049-y.

  42. Chinesta F, Cueto E, Abisset-Chavanne E, Duval JL, Khaldi FE. Virtual, digital and hybrid twins: a new paradigm in data-based engineering and engineered data. Arch Comput Methods Eng. 2018. https://doi.org/10.1007/s11831-018-9301-4.

  43. Niroomandi S, González D, Alfaro I, Bordeu F, Leygue A, Cueto E, Chinesta F. Real-time simulation of biological soft tissues: a PGD approach. Int J Numer Methods Biomed Eng. 2013;29(5):586–600. https://doi.org/10.1002/cnm.2544.

  44. Leygue A, Verron E. A first step towards the use of proper general decomposition method for structural optimization. Arch Comput Methods Eng. 2010;17(4):465–72. https://doi.org/10.1007/s11831-010-9052-3.

  45. Ammar A, Huerta A, Chinesta F, Cueto E, Leygue A. Parametric solutions involving geometry: a step towards efficient shape optimization. Comput Methods Appl Mech Eng. 2014;268:178–93. https://doi.org/10.1016/j.cma.2013.09.003.

  46. Gonzalez D, Masson F, Poulhaon F, Leygue A, Cueto E, Chinesta F. Proper generalized decomposition based dynamic data driven inverse identification. Math Comput Simul. 2012;82(9):1677–95. https://doi.org/10.1016/j.matcom.2012.04.001.

  47. Zuchiatti S, Feulvarch E, Roux J-C, Bergheau J-M, Perrin G, Tissot S. Application of the proper generalized decomposition to elasto-plastic finite element analysis. In: COMPLAS XIII : proceedings of the XIII international conference on computational plasticity: fundamentals and applications, Barcelona: CIMNE, C; 2015:834–42. http://hdl.handle.net/2117/80447.

  48. Claus S, Kerfriden P. A stable and optimally convergent latin-cutfem algorithm for multiple unilateral contact problems. Int J Numer Methods Eng. 2018;113(6):938–66. https://doi.org/10.1002/nme.5694.

  49. Nouy A. A generalized spectral decomposition technique to solve a class of linear stochastic partial differential equations. Comput Methods Appl Mech Eng. 2007;196(45–48):4521–37. https://doi.org/10.1016/j.cma.2007.05.016.

  50. Nouy A. Generalized spectral decomposition method for solving stochastic finite element equations: invariant subspace problem and dedicated algorithms. Comput Methods Appl Mech Eng. 2008;197(51–52):4718–36. https://doi.org/10.1016/j.cma.2008.06.012.

  51. Chevreuil M, Nouy A. Model order reduction based on proper generalized decomposition for the propagation of uncertainties in structural dynamics. Int J Numer Methods Eng. 2012;89(2):241–68. https://doi.org/10.1002/nme.3249.

  52. Rubio P-B, Louf F, Chamoin L. Fast model updating coupling Bayesian inference and PGD model reduction. Comput Mech. 2018. https://doi.org/10.1007/s00466-018-1575-8.

  53. Gallimard L, Vidal P, Polit O. Coupling finite element and reliability analysis through proper generalized decomposition model reduction. Int J Numer Methods Eng. 2013;95(13):1079–93. https://doi.org/10.1002/nme.4548.

  54. Robens-Radermacher A, Unger JF. Efficient reliability analysis coupling importance sampling using adaptive subset simulation and PGD model reduction. PAMM. 2019;19(1):201900169. https://doi.org/10.1002/pamm.201900169.

  55. Schneider J. Sicherheit und Zuverlässigkeit Im Bauwesen: Grundwissen Für Ingenieure. Zürich: vdf Hochschulverlag AG an der ETH Zürich; 1994. https://doi.org/10.3218/2167-7.

  56. Grasedyck L, Kressner D, Tobler C. A literature survey of low-rank tensor approximation techniques. GAMM Mitteilungen. 2013;36(1):53–78. https://doi.org/10.1002/gamm.201310004.

  57. Kolda TG, Bader BW. Tensor decompositions and applications. SIAM Rev. 2009;51(3):455–500. https://doi.org/10.1137/07070111X.

  58. Zlotnik S, Díez P, Gonzalez D, Cueto E, Huerta A. Effect of the separated approximation of input data in the accuracy of the resulting pgd solution. Adv Model Simul Eng Sci. 2015;2(1):28. https://doi.org/10.1186/s40323-015-0052-6.

  59. Falco A, Nouy A. A proper generalized decomposition for the solution of elliptic problems in abstract form by using a functional eckart young approach. J Math Anal Appl. 2011;376(2):469–80. https://doi.org/10.1016/j.jmaa.2010.12.003.

  60. Falcó A, Montés N, Chinesta F, Hilario L, Mora MC. On the existence of a progressive variational vademecum based on the proper generalized decomposition for a class of elliptic parameterized problems. J Comput Appl Math. 2018;330:1093–107. https://doi.org/10.1016/j.cam.2017.08.007.

  61. Zou X, Conti M, Díez P, Auricchio F. A nonintrusive proper generalized decomposition scheme with application in biomechanics. Int J Numer Methods Eng. 2018;113(2):230–51. https://doi.org/10.1002/nme.5610.

  62. Chamoin L, Pled F, Allier P-E, Ladevèze P. A posteriori error estimation and adaptive strategy for PGD model reduction applied to parametrized linear parabolic problems. Comput Methods Appl Mech Eng. 2017;327:118–46 (special issue: Advances in computational mechanics and scientific computation, the cutting edge). https://doi.org/10.1016/j.cma.2017.08.047.

  63. Nadal E, Leygue A, Chinesta F, Beringhier M, Ródenas JJ, Fuenmayor FJ. A separated representation of an error indicator for the mesh refinement process under the proper generalized decomposition framework. Comput Mech. 2015;55(2):251–66. https://doi.org/10.1007/s00466-014-1097-y.

  64. Lebrun R, Dutfoy A. An innovating analysis of the nataf transformation from the copula viewpoint. Probabilistic Eng Mech. 2009;24(3):312–20. https://doi.org/10.1016/j.probengmech.2008.08.001.

  65. Rosenblatt M. Remarks on a multivariate transformation. Ann Math Stat. 1952;23(3):470–2. https://doi.org/10.1214/aoms/1177729394.

  66. Botev ZI, Kroese DP. Efficient Monte Carlo simulation via the generalized splitting method. Stat Comput. 2012;22(1):1–16. https://doi.org/10.1007/s11222-010-9201-4.

  67. Cérou F, Del Moral P, Furon T, Guyader A. Sequential Monte Carlo for rare event estimation. Stat Comput. 2012;22(3):795–808. https://doi.org/10.1007/s11222-011-9231-6.

  68. Ching J, Au SK, Beck JL. Reliability estimation for dynamical systems subject to stochastic excitation using subset simulation with splitting. Comput Methods Appl Mech Eng. 2005;194(12–16):1557–79. https://doi.org/10.1016/j.cma.2004.05.028. Special Issue on Computational Methods in Stochastic Mechanics and Reliability Analysis.

  69. Hsu W-C, Ching J. Evaluating small failure probabilities of multiple limit states by parallel subset simulation. Probabilistic Eng Mech. 2010;25(3):291–304. https://doi.org/10.1016/j.probengmech.2010.01.003.

  70. McKay MD, Beckman RJ, Conover WJ. A comparison of three methods for selecting values of input variables in the analysis of output from a computer code. Technometrics. 1979;21(2):239–45.

  71. Breitung K. On subsets and onions: lost in outer space. In: Proceedings of the joint ICVRAM ISUMA UNCERTAINTIES conference. 2018.

  72. Rognes ME, Logg A. Automated goal-oriented error control I: stationary variational problems. arXiv e-prints. 2012; 1204–6643. arxiv:1204.6643.

  73. Alnæs MS, Blechta J, Hake J, Johansson A, Kehlet B, Logg A, Richardson C, Ring J, Rognes ME, Wells GN. The FEniCS project version 1.5. Arch Numer Softw. 2015. https://doi.org/10.11588/ans.2015.100.20553.

Acknowledgements

The research was supported by the German Research Foundation (DFG) under Project 326557591.

Funding

The research was supported by the German Research Foundation (DFG) under project 326557591.

Author information

Contributions

All authors read and approved the final manuscript.

Corresponding author

Correspondence to Annika Robens-Radermacher.

Ethics declarations

Competing interests

The authors declare that they have no competing interests.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

Reprints and permissions

About this article

Cite this article

Robens-Radermacher, A., Unger, J.F. Efficient structural reliability analysis by using a PGD model in an adaptive importance sampling schema. Adv. Model. and Simul. in Eng. Sci. 7, 29 (2020). https://doi.org/10.1186/s40323-020-00168-z

