The general steps to construct a POD-based surrogate model are: obtaining a training set, preprocessing the snapshot matrix, reducing the output, interpolating the amplitudes and lastly postprocessing the approximation. All steps are depicted in Fig. 1 and will be discussed in more detail in this section.
Following the steps presented in Fig. 1 and elaborated in the following sections, four different surrogate models will be constructed. To illustrate the theory, the following example output data field is used throughout the discussion, although the approaches are generic and can be applied to any high-dimensional multivariate output field. The approaches discussed in this work can serve as guidelines for constructing surrogate models of other cases; the suitability of these methods, and of other methods that can perform the same task, is expected to depend on the specific application. The example output field of a single FE simulation consists of M variables that are stored in the following \(M \times 1\) result vector, which is partitioned according to 3 different physical fields:
$$\begin{aligned} {\mathbf {y}} = \left\{ \begin{array}{c} \left\{ {\mathbf {y}}_\mathrm {u} \right\} \\ \left\{ {\mathbf {y}}_\upvarepsilon \right\} \\ \left\{ {\mathbf {y}}_\upsigma \right\} \\ \end{array}\right\} \end{aligned}$$
(1)
in which \({\mathbf {y}}_\mathrm {u}\) contains the \(M_\mathrm {u}\) displacements, \({\mathbf {y}}_\upvarepsilon \) contains the \(M_\upvarepsilon \) equivalent plastic strains, and \({\mathbf {y}}_\upsigma \) contains the \(M_\upsigma \) stress components.
The approaches described in the following sections can be applied to any other result field that is composed of different physical fields. Different steps of the preprocessing and decomposition procedure may be either omitted or performed in several ways, leading to the following four approaches for constructing a POD-based surrogate model considered in this paper:

1. Only centering the data by subtracting the mean value (section ‘Zero centering’).

2. Centering, scaling and decomposing all data at once (section ‘Scaling each physical part’).

3. Centering the data and decomposing each physical part independently (section ‘Separate reduction of each physical part’).

4. Centering the data and assembling basis vectors from one physical quantity (section ‘Assembly from one physical part’).
Approaches 1 and 3 are commonly applied in surrogate modelling of FE analyses [5, 28,29,30,31], while approach 2 is often applied in surrogate modelling of CFD analyses [14, 32]. Scaling the data means that each variable, each physical part or the total snapshot matrix is multiplied by a constant. Approach 4 is novel: one single physical part is used to assemble the basis vectors describing the total field. In the following sections, the steps of the surrogate model construction and the differences between the four approaches are explained in detail.
Obtaining a training set
The input parameter space \({\mathbf {x}}\) is an \(N_\mathrm {dim}\)-dimensional space that will be sampled with \(N_\mathrm {exp}\) sample points. The sample points \({\mathbf {x}}_i\) in the initial sample set \({\mathbf {X}}\) can be generated using different sampling strategies. At each sample point a simulation is performed, of which the data is stored in the result vector \({\mathbf {y}}_i\). The snapshot matrix \({\mathbf {Y}}\) is obtained by collecting the simulation results as its columns:
$$\begin{aligned} {\mathbf {Y}} = \left[ \begin{array}{cccc} {\mathbf {y}}_1&{\mathbf {y}}_2&\cdots&{\mathbf {y}}_{N_\mathrm {exp}} \end{array} \right] \end{aligned}$$
(2)
Many different sampling strategies can be found in literature. In the construction of POD-based surrogate models it is common practice to combine different sampling strategies to obtain the initial sample set, e.g. in the work of Havinga [22] a full factorial design is combined with a space-filling Latin hypercube sample (LHS) and in the work of Steffes-lai [30] a star-point design is extended with more sample points using a custom infill strategy. A full factorial design consists of \(2^{N_\mathrm {dim}}\) sample points that are the different combinations of the input parameters at their minimum and maximum values. The full factorial design is used to properly cover the boundaries of the parameter space [22]. A star-point design consists of a center point, that is the nominal setting, and \(2N_\mathrm {dim}\) star points. The star points are constructed by setting each input parameter one-at-a-time to its minimum and maximum value, while the other input parameters are kept at their nominal values. The star-point design can be used for sensitivity analysis [30]. To generate an LHS the parameter space is divided into \(N_\mathrm {exp}^{N_\mathrm {dim}}\) \(N_\mathrm {dim}\)-dimensional hypercubes. Sample points are placed within the hypercubes under the condition that no two sample points occupy the same interval along any dimension. If this is done such that the LHS is space-filling, this generally leads to a uniform sampling over the whole parameter space.
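The Latin hypercube construction described above can be sketched in a few lines of NumPy. This is a minimal illustration, not the sampling code used in this work: the function name `latin_hypercube` is hypothetical and the space-filling (e.g. maximin) optimization step mentioned in the text is omitted.

```python
import numpy as np

def latin_hypercube(n_exp, n_dim, rng=None):
    """Latin hypercube sample of n_exp points on the unit cube [0, 1)^n_dim.

    Along every dimension the axis is split into n_exp equal intervals and
    each interval receives exactly one sample point, so no two points share
    an interval in any dimension (the Latin condition from the text).
    """
    rng = np.random.default_rng(rng)
    # Independently shuffled interval indices 0..n_exp-1 per dimension,
    # plus a random offset inside each interval.
    intervals = rng.permuted(np.tile(np.arange(n_exp), (n_dim, 1)), axis=1).T
    return (intervals + rng.random((n_exp, n_dim))) / n_exp

X = latin_hypercube(n_exp=8, n_dim=2, rng=0)  # 8 sample points in 2D
```

A space-filling design would additionally re-sample or optimize this layout, e.g. by maximizing the minimum pairwise distance.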
Preprocessing the snapshot matrix
The snapshot matrix obtained from the initial sample set described in the previous section can be used for decomposition without any further modifications. For instance, no modifications to the snapshot matrix are made in the work of Bocciarelli et al. [21]. However, when a suitable preprocessing method is applied to the snapshot matrix, the information content of the first basis vectors can be increased. We define the preprocessing of the snapshot matrix as:
$$\begin{aligned} {\mathbf {Y}}^* = f_*({\mathbf {Y}}) \end{aligned}$$
(3)
In which \(f_*(\cdot )\) is the preprocessing function that transforms the snapshot matrix into the preprocessed snapshot matrix \({\mathbf {Y}}^*\). The superscript \(^*\) and subscript \(_*\) denote the applied preprocessing method, e.g. ‘0’ for zero centering and ‘\(\mathrm {scaled}\)’ for scaling per physical part.
Zero centering
In the construction of non-intrusive POD-based surrogate models it is common practice to subtract the mean result from the snapshot matrix before decomposing [33]. Subtracting the mean centers the data around zero; the resulting matrix is therefore also referred to as the deviation matrix [14]. Subtracting the mean increases the information captured in the first basis vectors. When the mean is not subtracted, the first basis vector will dominate the decomposition and will point towards the mean [16, 34]. As the basis vectors are orthogonal, the subsequent basis vectors will be constrained by the direction of the first basis vector, resulting in a suboptimal decomposition of the data set. An extensive comparison between non-centred and centred PCA is given by Cadima and Jolliffe [35]. Although subtracting the mean has been shown to be beneficial in the construction of non-intrusive POD-based surrogate models [36], there are also applications in which subtracting the mean is not compatible, for example incremental SVD [37].
To obtain the zero centered snapshot matrix, the mean over each row, and hence of each variable, in the original snapshot matrix is subtracted:
$$\begin{aligned} {\mathbf {Y}}^0 = f_0({\mathbf {Y}}) = {\mathbf {Y}} - \overline{{\mathbf {y}}}{\mathbf {1}}^T \quad \text {where:} \quad \overline{{\mathbf {y}}} = \frac{1}{N_\mathrm {exp}}{\mathbf {Y}}{\mathbf {1}} \end{aligned}$$
(4)
in which \({\mathbf {1}}\) is an \(N_\mathrm {exp}\times 1\) vector of ones. Note that the rank of the zero centered snapshot matrix is one lower than the rank of the original matrix.
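Equation (4) amounts to subtracting the row-wise mean from the snapshot matrix. A minimal NumPy sketch (the function name `zero_center` is an illustration, not the authors' code):

```python
import numpy as np

def zero_center(Y):
    """Zero centering of equation (4): Y0 = Y - y_bar 1^T.

    Y has shape (M, N_exp), one column per simulation result vector.
    Returns the deviation matrix Y0 and the mean result vector y_bar.
    """
    y_bar = Y.mean(axis=1, keepdims=True)  # (1/N_exp) * Y @ 1
    return Y - y_bar, y_bar

# Toy snapshot matrix: M = 5 variables, N_exp = 4 experiments
rng = np.random.default_rng(1)
Y = rng.random((5, 4))
Y0, y_bar = zero_center(Y)
```

Each row of `Y0` now has zero mean, and the rank drops by one, consistent with the note above.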
Scaling the snapshot matrix
Scaling the entries of the zero centered snapshot matrix is a common preprocessing method in Principal Component Analysis. The scaling can be applied to the entire snapshot matrix, to each physical part separately or to each variable (row) individually. The scaling constants can be calculated in many different ways. Scaling the data includes, but is not limited to, normalising the data.
The simplest scaling method is to divide the entire snapshot matrix by a constant. This is a common preprocessing method in covariance PCA, in which the zero centered snapshot matrix is scaled with the square root of the number of experiments. When the snapshot matrix is divided by \(\sqrt{N_\mathrm {exp}}\) or \(\sqrt{N_\mathrm {exp}-1}\), the matrix \({\mathbf {Y}}^T{\mathbf {Y}}\) will be a covariance matrix [17, 38]. Dividing a matrix by a constant does not alter the singular vectors found in the SVD; it only changes the magnitudes of the singular values by the same factor. Dividing by a constant therefore does not alter the basis vectors obtained in a decomposition and is for that reason left out in the remainder of this paper.
The scaling can also be performed per variable, that is per row of the snapshot matrix. For example, to perform correlation PCA all rows are divided by their Euclidean norm [17]. A major advantage of scaling by the norm is that each row is divided by a quantity with the same physical dimension; hence, the entries in the snapshot matrix become dimensionless. Instead of the norm, one can also use the mean, range or standard deviation to scale the data. Note that the mean of a zero centered snapshot matrix is \({\mathbf {0}}\), and hence the mean \(\bar{{\mathbf {y}}}\) as defined in equation (4) should be used for scaling. As pointed out by Skillicorn [33], dividing by the standard deviation will ensure that most of the values in each row fall in the range \(-1\) to \(+1\). It has been found in previous work [36] that this is not beneficial for the quality of the surrogate model, because the variation of variables with relatively small variations is amplified and these variables therefore get a larger contribution to the basis vectors. To illustrate this, consider the industrial bending problem described in the “Demonstrator processes” section. The displacements in y-direction in the clamped part of the sheet metal will be very small or even zero, and therefore their standard deviation will also be very small or zero. Scaling these displacements with their standard deviation will magnify their importance; when the snapshot matrix is decomposed, the basis vectors will capture behavior (and noise) that in practice is barely present. Scaling each row is therefore also left out of consideration in this paper.
Scaling each physical part
The scaling can also be performed on each physical part of the snapshot matrix. As pointed out by Guéntot et al. [14], for a snapshot matrix containing different physical quantities it is better to scale per data type. As different physical quantities are present in the result vector, they can be of different orders of magnitude. For example, in the demonstrator process the stress components in MPa are of order \(10^2\), while the strain components are of order \(10^{-1}\). This will cause the decomposition to be mainly determined by the stress terms. It is therefore proposed to scale each part of the snapshot matrix; in [14] this is referred to as physical scaling. The zero centered snapshot matrix in the example of this work, with scaling applied to each part, takes the form:
$$\begin{aligned} {\mathbf {Y}}^\mathrm {scaled} = f_\mathrm {scaled}({\mathbf {Y}})= \left[ \begin{array}{c} \left[ {\mathbf {Y}}_\mathrm {u} - \overline{{\mathbf {y}}}_\mathrm {u}{\mathbf {1}}^T \right] \cdot s_\mathrm {u} \\ \left[ {\mathbf {Y}}_\upvarepsilon - \overline{{\mathbf {y}}}_\upvarepsilon {\mathbf {1}}^T \right] \cdot s_\upvarepsilon \\ \left[ {\mathbf {Y}}_\upsigma - \overline{{\mathbf {y}}}_\upsigma {\mathbf {1}}^T \right] \cdot s_\upsigma \\ \end{array} \right] \end{aligned}$$
(5)
Herein \(s_\mathrm {u}\), \(s_\upvarepsilon \) and \(s_\upsigma \) are the scaling constants for the displacement, equivalent plastic strain and stress respectively. The scaling constants can for example be based on the mean, standard deviation or range of each physical part [14]. To the best of the authors' knowledge, no applications can be found in literature of scaling the snapshot matrix per physical part to model FE simulation data. It is proposed to scale the different parts by their range, i.e. the minimum entry of each physical part is subtracted from its maximum entry. For the displacement field the physical scaling constant \(s_\mathrm {u}\) can be calculated as:
$$\begin{aligned} s_\mathrm {u} = \left( \max {\mathbf {Y}}_\mathrm {u} - \min {\mathbf {Y}}_\mathrm {u}\right) ^{-1} \end{aligned}$$
(6)
The scaling constants for the other parts can be calculated in an equivalent way.
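Equations (5) and (6) can be sketched for an arbitrary partition of the snapshot matrix. This is an illustrative sketch under the stated assumptions: the function name `scale_per_part` and the `part_sizes` argument are hypothetical, and each part's range is taken over all its entries, as in equation (6).

```python
import numpy as np

def scale_per_part(Y, part_sizes):
    """Zero-center Y and scale each physical part by its range.

    Y has shape (M, N_exp); part_sizes lists the number of rows of each
    physical part, e.g. [M_u, M_eps, M_sig]. Returns the preprocessed
    matrix (equation (5)), the mean vector and the scaling constants s
    computed per part as in equation (6).
    """
    y_bar = Y.mean(axis=1, keepdims=True)
    Y0 = Y - y_bar
    Y_scaled = np.empty_like(Y0)
    s, start = [], 0
    for size in part_sizes:
        part = Y[start:start + size]
        s_part = 1.0 / (part.max() - part.min())   # equation (6)
        Y_scaled[start:start + size] = Y0[start:start + size] * s_part
        s.append(s_part)
        start += size
    return Y_scaled, y_bar, np.array(s)

# Toy matrix with two physical parts of very different magnitude
rng = np.random.default_rng(2)
Y = rng.random((6, 5))
Y[3:] *= 100.0
Ys, y_bar, s = scale_per_part(Y, [3, 3])
```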
Decomposing the output
The predominant modes in the preprocessed snapshot matrices will now be determined. Hence, both the zero centered snapshot matrix (\({\mathbf {Y}}^0\), section ‘Zero centering’) and the snapshot matrix that is scaled per physical part (\({\mathbf {Y}}^\mathrm {scaled}\), section ‘Scaling each physical part’) will be decomposed. The snapshot matrix is decomposed into a new basis before reduction. In this work a singular value decomposition (SVD) is used to find the proper orthogonal basis vectors [39]. The SVD of the preprocessed snapshot matrix takes the following form:
$$\begin{aligned} {\mathbf {Y}}^* = {\varvec{\Phi }}_* {\mathbf {D}}_* {\mathbf {V}}_*^T \end{aligned}$$
(7)
The preprocessed snapshot matrix is decomposed into three matrices: \({\varvec{\Phi }}_*\) that contains the left singular vectors \(\varvec{\upvarphi }_{n,*}\) as its columns, \({\mathbf {D}}_*\) that contains the singular values \(d_{n,*}\) on its diagonal and \({\mathbf {V}}^T_*\) that contains the right singular vectors \({\mathbf {v}}_{n,*}\) as its rows. The subscripts n and \(*\) denote the nth direction in the basis and the applied preprocessing method, respectively. As the singular values are sorted by size from largest to smallest, the most information will be captured by the first singular vectors.
Alternatively the so-called method of snapshots can be used [16], though the resulting basis will be the same. In the method of snapshots the basis vectors are constructed from the snapshot matrix by finding the eigenvalues and eigenvectors of the matrix \({\mathbf {C}}_* = ({\mathbf {Y}}^*)^{T}{\mathbf {Y}}^*\) [28]. The orthonormal basis vectors can be found using:
$$\begin{aligned} \varvec{\upvarphi }_{n,*} = {\mathbf {Y}}^* \cdot {\mathbf {v}}_{n,*} \cdot \lambda _{n,*}^{-1/2} \end{aligned}$$
(8)
In which \({\mathbf {v}}_{n,*}\) and \(\lambda _{n,*}\) are the eigenvectors and eigenvalues of the matrix \({\mathbf {C}}_* = ({\mathbf {Y}}^*)^{T}{\mathbf {Y}}^*\) respectively. By comparing with equation (7), the link between POD and SVD can be deduced. The eigenvectors of the matrix \({\mathbf {C}}_*\) are the right singular vectors; hence, the matrix \({\mathbf {V}}_*\) holds the collection of all \(N_\mathrm {exp}\) eigenvectors \({\mathbf {v}}_{n,*}\). The left singular vectors are the eigenvectors of \(({\mathbf {Y}}^*)({\mathbf {Y}}^*)^{T}\) [40]. The singular values are simply the square roots of the eigenvalues, hence \(d_{n,*} = \sqrt{\lambda _{n,*}}\) [41].
Note that both the left and right singular vectors are orthonormal, so that:
$$\begin{aligned} {\mathbf {V}}_*^T{\mathbf {V}}_* = {\varvec{\Phi }}_*^T{\varvec{\Phi }}_* = {\mathbf {I}} \end{aligned}$$
(9)
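The equivalence between the direct SVD of equation (7) and the method of snapshots of equation (8) can be verified numerically. A minimal sketch with a random zero-centered snapshot matrix (the variable names and the choice of `k` retained modes are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(3)
Ystar = rng.random((50, 6))                    # M = 50, N_exp = 6
Ystar -= Ystar.mean(axis=1, keepdims=True)     # zero-centered snapshot matrix

# Direct SVD (equation (7)): Y* = Phi D V^T
Phi, d, Vt = np.linalg.svd(Ystar, full_matrices=False)

# Method of snapshots: eigen-decomposition of C = (Y*)^T Y*
lam, V = np.linalg.eigh(Ystar.T @ Ystar)
order = np.argsort(lam)[::-1]                  # sort eigenvalues descending
lam, V = lam[order], V[:, order]

# Basis vectors via equation (8): phi_n = Y* v_n lambda_n^{-1/2}
k = 4                                          # keep modes with nonzero lambda
Phi_snap = Ystar @ V[:, :k] / np.sqrt(lam[:k])
```

The singular values equal the square roots of the eigenvalues, and the resulting basis vectors coincide up to sign, as stated in the text.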
Each left singular vector represents a vector in the result space (of length M). The left singular vector matrix can be seen as a new coordinate system in the result space, and can therefore be used as a basis for the data. This new basis can be truncated so that it contains only the first K basis vectors \(\varvec{\upvarphi }_{n,*}\) with \(n=1\ldots K\). We define the truncated basis with K basis vectors as:
$$\begin{aligned} {\varvec{\Phi }}_*^{[K]} = \left[ \begin{array}{cccc} \varvec{\upvarphi }_{1,*}&\varvec{\upvarphi }_{2,*}&\cdots&\varvec{\upvarphi }_{K,*} \end{array} \right] \end{aligned}$$
(10)
In which the superscript [K] denotes the number of basis vectors in the basis. The amplitudes of the result vectors projected onto the basis vectors are found by multiplying the singular values with the right singular vectors:
$$\begin{aligned} {\mathbf {A}}_* = {\mathbf {D}}_* {\mathbf {V}}_*^T \end{aligned}$$
(11)
The vector \(\varvec{\upalpha }_{n,*}^T\) collects the amplitudes of the \(N_\mathrm {exp}\) result vectors corresponding to basis vector \(\varvec{\upvarphi }_{n,*}\). The K-rank approximation of the preprocessed snapshot matrix \({\mathbf {Y}}^{[K,*]}\) can now be written as:
$$\begin{aligned} {\mathbf {Y}}^{[K,*]} = {\varvec{\Phi }}_*^{[K]} {\mathbf {A}}_*^{[K]} \end{aligned}$$
(12)
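The truncation and K-rank reconstruction of equations (10)-(12) can be sketched as follows (toy dimensions; variable names are illustrative):

```python
import numpy as np

rng = np.random.default_rng(4)
Ystar = rng.random((30, 8))                 # M = 30, N_exp = 8
Ystar -= Ystar.mean(axis=1, keepdims=True)  # preprocessed (zero-centered)

Phi, d, Vt = np.linalg.svd(Ystar, full_matrices=False)
A = np.diag(d) @ Vt        # amplitude matrix of equation (11): A = D V^T

K = 3
Phi_K = Phi[:, :K]         # truncated basis of equation (10)
A_K = A[:K]                # first K amplitude rows
Y_K = Phi_K @ A_K          # K-rank approximation of equation (12)
```

Because the singular values are sorted in descending order, the Frobenius norm of the truncation error equals the root sum of squares of the discarded singular values.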
Reduction of the snapshot matrices
Both the zero centered snapshot matrix (\({\mathbf {Y}}^0\), section ‘Zero centering’) and the snapshot matrix that is scaled per physical part (\({\mathbf {Y}}^\mathrm {scaled}\), section ‘Scaling each physical part’) are decomposed using the method described in the previous section, resulting in the two bases \({\varvec{\Phi }}_0\) and \({\varvec{\Phi }}_\mathrm {scaled}\) respectively. The bases found using these two preprocessing methods can both be truncated to K basis vectors to construct the corresponding K-rank surrogate models.
Separate reduction of each physical part
The third method for surrogate model construction uses decomposition of each physical part of the zero centered snapshot matrix separately:
$$\begin{aligned} \begin{aligned} {\mathbf {Y}}_\mathrm {u}^0&= \left[ {\mathbf {Y}}_\mathrm {u} - \overline{{\mathbf {y}}}_\mathrm {u}{\mathbf {1}}^T \right] = {\varvec{\Phi }}_\mathrm {u} {{\mathbf {D}}_\mathrm {u}} {\mathbf {V}}_\mathrm {u}^T \\ {\mathbf {Y}}_{\upvarepsilon }^0&= \left[ {\mathbf {Y}}_\upvarepsilon - \overline{{\mathbf {y}}}_\upvarepsilon {\mathbf {1}}^T \right] = {\varvec{\Phi }}_\upvarepsilon {{\mathbf {D}}_\upvarepsilon } {\mathbf {V}}_\upvarepsilon ^T \\ {\mathbf {Y}}_{\upsigma }^0&= \left[ {\mathbf {Y}}_\upsigma - \overline{{\mathbf {y}}}_\upsigma {\mathbf {1}}^T \right] = {\varvec{\Phi }}_\upsigma {{\mathbf {D}}_\upsigma } {\mathbf {V}}_\upsigma ^T \end{aligned} \end{aligned}$$
(13)
Note that scaling the different physical parts does not have any influence on the separate bases, as explained in the “Scaling each physical part” section: as the physical parts are decomposed separately, scaling each part will only change the magnitude of the corresponding singular values of that part.
Assembly from one physical part
In this section a new method for assembling the basis vectors from one physical part, i.e. from a submatrix of the preprocessed snapshot matrix, is described. The basis is first determined for one of the physical quantities, after which the full basis is assembled from it. The full basis describes the directions of variation in the data that covary most with the variation of the main physical quantity. The reasoning is that certain physical quantities may be dependent on or covariant with others, so that the modes of the dependent quantities may be indirectly captured in the basis of the main quantity. The resulting surrogate model will model the main quantity with maximum accuracy, without much loss of accuracy in the other physical parts.
Note that this reduction method can be applied independently of the chosen preprocessing method. By assembling from one physical part, the covariances between the different physical parts are retained, while the dominance of other physical parts is reduced. By means of example, the basis vectors are assembled from the zero centered equivalent plastic strain field; however, the method described below can be applied for assembly from any part of the snapshot matrix. One of the physical parts, in this case the zero centered equivalent plastic strain, is decomposed using SVD as described previously:
$$\begin{aligned} {\mathbf {Y}}_{\upvarepsilon }^0 = {\varvec{\Phi }}_\upvarepsilon {{\mathbf {D}}_\upvarepsilon } {\mathbf {V}}_\upvarepsilon ^T \end{aligned}$$
(14)
Note that the left singular vectors of the equivalent plastic strain have size \(M_\upvarepsilon \times 1\). The basis vectors that span the entire result space are assembled by multiplying the total zero centered snapshot matrix (\({\mathbf {Y}}^0\)) with the right singular vectors and the inverse singular values of the zero centered equivalent strain:
$$\begin{aligned} {\varvec{\Phi }}_{(\upvarepsilon )} = \left[ \begin{array}{c} \left[ {\varvec{\Phi }}_{{(\upvarepsilon ),}\mathrm {u}} \right] \\ \left[ {\varvec{\Phi }}_{{(\upvarepsilon ),}\upvarepsilon } \right] \\ \left[ {\varvec{\Phi }}_{{(\upvarepsilon ),}\upsigma } \right] \\ \end{array} \right] = {\mathbf {Y}}^0 {\mathbf {V}}_\upvarepsilon {{\mathbf {D}}_\upvarepsilon }^{-1} \end{aligned}$$
(15)
In which \({\varvec{\Phi }}_{(\upvarepsilon )}\) denotes the basis that spans the entire result space, assembled from the equivalent plastic strain. Due to the multiplication with the full preprocessed snapshot matrix, the basis vectors are no longer orthonormal.
To calculate the amplitudes a least squares projection is used:
$$\begin{aligned} {\mathbf {A}}_{(\upvarepsilon )} = ({\varvec{\Phi }}^{T}_{(\upvarepsilon ){,\upvarepsilon }}\ {\varvec{\Phi }}_{(\upvarepsilon ){,\upvarepsilon }})^{-1} {\varvec{\Phi }}^{T}_{(\upvarepsilon ){,\upvarepsilon }} {\mathbf {Y}}_{\upvarepsilon }^0 \end{aligned}$$
(16)
The obtained basis can be truncated to obtain the K-rank approximation of the preprocessed snapshot matrix assembled from the equivalent plastic strain field:
$$\begin{aligned} \begin{array}{ccc} {\mathbf {Y}}^{[K,(\upvarepsilon )]} = &{} {\varvec{\Phi }}_{(\upvarepsilon )}^{[K]} \quad &{} {\mathbf {A}}_{(\upvarepsilon )}^{[K]} \\ M \times N_\mathrm {exp} &{} M \times K &{} K \times N_\mathrm {exp} \end{array} \end{aligned}$$
(17)
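The assembly of equations (14)-(16) can be sketched numerically. This is an illustrative sketch with toy dimensions; the variable names and the number of retained modes `k` are assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(5)
M_u, M_eps, M_sig, N_exp = 20, 15, 25, 6
Y0 = rng.random((M_u + M_eps + M_sig, N_exp))
Y0 -= Y0.mean(axis=1, keepdims=True)          # zero-centered snapshot matrix

# Decompose only the equivalent plastic strain part (equation (14))
Y_eps = Y0[M_u:M_u + M_eps]
Phi_eps, d_eps, Vt_eps = np.linalg.svd(Y_eps, full_matrices=False)

# Assemble full-field basis vectors (equation (15)): Phi = Y0 V_eps D_eps^{-1}
k = 4                                         # keep modes with nonzero d_eps
Phi_full = Y0 @ Vt_eps[:k].T / d_eps[:k]

# Amplitudes by least-squares projection on the strain rows (equation (16))
Phi_e = Phi_full[M_u:M_u + M_eps]
A = np.linalg.solve(Phi_e.T @ Phi_e, Phi_e.T @ Y_eps)
```

Note that the strain block of the assembled basis recovers the orthonormal strain basis itself, while the displacement and stress blocks are generally not orthonormal, consistent with the remark after equation (15).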
Interpolating the amplitudes
To predict the full result vector at any position of the input parameter space that is not part of the initial sample set \({\mathbf {X}}\), the amplitudes of the basis vectors will be interpolated. Each row of the amplitude matrix \({\mathbf {A}}_*\) in equation (11) consists of the amplitudes corresponding to basis vector n. The entries in vector \(\varvec{\upalpha }_{n,*}\) correspond to each sample point \({\mathbf {x}}_i\), hence it collects the entries \(\alpha _{in,*}\). To construct a continuous surrogate model, the amplitudes corresponding to the results of different input settings will be interpolated.
$$\begin{aligned} \varvec{\upalpha }_{n,*} \rightarrow {\hat{\upalpha }}_{n,*}({\mathbf {x}}) \end{aligned}$$
(18)
In the remainder of this section the subscript that indicates the preprocessing method \(*\) is left out for clarity. One can also interpolate the right singular vectors as they are simply the amplitudes divided by the singular values [30]. The interpolation can be done using different interpolation methods. In this work Radial Basis Functions (RBFs) are used to interpolate the amplitudes. An RBF can be any function that depends on the distance between an arbitrary point in the parameter space \({\mathbf {x}}\) and a sample point in the parameter space \({\mathbf {x}}_i\). The radial basis function corresponding to sample point \({\mathbf {x}}_i\) will be:
$$\begin{aligned} g_i({\mathbf {x}}) = g_i({\mathbf {x}} - {\mathbf {x}}_i) \end{aligned}$$
(19)
Several different basis functions have been proposed in literature. In multiple studies it has been shown that the performance of the multiquadric RBF for scalar interpolation is generally good [2, 42]. In a comparative study performed by Hamim [43] on the application of RBFs in POD-based surrogate models, it was shown that multiquadric RBFs perform best compared to other types of RBFs. Therefore a multiquadric RBF is chosen for interpolation of the amplitudes in all surrogate models:
$$\begin{aligned} g_{n,i}({\mathbf {x}}) = \sqrt{1 + ({\mathbf {x}} - {\mathbf {x}}_i )^{T} \mathbf {\uptheta }_{n} ({\mathbf {x}} - {\mathbf {x}}_i ) } \end{aligned}$$
(20)
In which \(\mathbf {\uptheta }_{n}\) is the \(N_\mathrm {dim} \times N_\mathrm {dim}\) diagonal global scaling matrix of the amplitude corresponding to basis vector n. The global scaling parameters are optimized per basis vector and per surrogate model based on the Leave-One-Out Cross-Validation values, as proposed in [44].
As the mean result vector is subtracted from the snapshot matrix, the mean amplitude will be (close to) 0. No polynomial detrending is applied before interpolating the amplitudes [45, 46]. The interpolated amplitude is written as the sum of the RBFs \(g_{n,i}({\mathbf {x}})\) multiplied with a weight \(w_{n,i}\):
$$\begin{aligned} {\hat{\upalpha }}_{n}({\mathbf {x}}) = \displaystyle \sum _{i=1}^{N_{\mathrm {exp}}} w_{n,i} g_{n,i}({\mathbf {x}}) \end{aligned}$$
(21)
One radial basis function is placed at the coordinate of each sample point in the input parameter space. The weights of the basis functions can be determined exactly, as the amplitudes are known at this same number of coordinates. To find the unknown weights, all RBFs are evaluated at the initial sample points \({\mathbf {x}}_i\) and collected in an \(N_\mathrm {exp} \times N_\mathrm {exp}\) interpolation matrix \({\mathbf {G}}_n\) [31]:
$$\begin{aligned} ({\mathbf {G}}_n)_{ij} = g_{n,j}({\mathbf {x}}_i) \end{aligned}$$
(22)
The weights in equation (21) can be solved by setting \( {\hat{\upalpha }}_n({\mathbf {x}}_i) = \alpha _{in}\) leading to:
$$\begin{aligned} {\mathbf {w}}_n^T = \varvec{\upalpha }^T_n {\mathbf {G}}_n^{-1} \end{aligned}$$
(23)
Postprocessing
When the amplitudes corresponding to all K basis vectors are interpolated, the continuous approximation in a preprocessed basis reads:
$$\begin{aligned} \hat{{\mathbf {y}}}^{[K,*]}({\mathbf {x}}) = \displaystyle \sum _{n=1}^{K} \varvec{\upvarphi }_{n,*} {\hat{\upalpha }}_{n,*}({\mathbf {x}}) \end{aligned}$$
(24)
To construct the final surrogate model, the approximation in the truncated basis must be mapped back to the same form as the initial snapshot matrix. The function that reverses the applied preprocessing method is called the postprocessing function and is denoted with \(f_*^{1}(\cdot )\). The interpolated result vector can be written as:
$$\begin{aligned} \hat{{\mathbf {y}}}^{[K]}_*({\mathbf {x}}) = f_*^{1}(\hat{{\mathbf {y}}}^{[K,*]}({\mathbf {x}}))= f_*^{1}\left( \sum \limits _{n=1}^K \varvec{\upvarphi }_{n,*} {\hat{\upalpha }}_{n,*}({\mathbf {x}}) \right) \end{aligned}$$
(25)
The surrogate model with K basis vectors from the zero centered snapshot matrix can be written as:
$$\begin{aligned} \hat{{\mathbf {y}}}_0^{[K]}({\mathbf {x}}) = \bar{{\mathbf {y}}} + \sum \limits _{n=1}^K \varvec{\upvarphi }_{n,0} {\hat{\upalpha }}_{n,0}({\mathbf {x}}) \end{aligned}$$
(26)
The surrogate model with K basis vectors from the zero centered and scaled snapshot matrix can be written as:
$$\begin{aligned} \hat{{\mathbf {y}}}_\mathrm {{scaled}}^{[K]}({\mathbf {x}}) = \bar{{\mathbf {y}}} + \left( \sum \limits _{n=1}^K \varvec{\upvarphi }_{n,\mathrm {scaled}} {\hat{\upalpha }}_{n,\mathrm {scaled}}({\mathbf {x}}) \right) \oslash {\mathbf {s}} \end{aligned}$$
(27)
In which \(\oslash \) is the Hadamard division, representing element-wise division, and \({\mathbf {s}}\) is the \(M\times 1\) scaling vector with \(M_\mathrm {u}\) entries \(s_\mathrm {u}\), \(M_\upvarepsilon \) entries \(s_\upvarepsilon \) and \(M_\upsigma \) entries \(s_\upsigma \), which are calculated based on equation (6).
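The postprocessing of equation (27) can be sketched as follows: the truncated-basis approximation is divided element-wise by the scaling vector and the mean is added back. All shapes and values are toy data for illustration only.

```python
import numpy as np

rng = np.random.default_rng(7)
M_u, M_eps, M_sig, K = 4, 3, 5, 2
M = M_u + M_eps + M_sig

y_bar = rng.random(M)                         # mean result vector
Phi_K = rng.random((M, K))                    # truncated (scaled) basis
alpha_hat = rng.random(K)                     # interpolated amplitudes at x
s_parts = np.array([0.5, 10.0, 0.01])         # s_u, s_eps, s_sig (equation (6))
s = np.repeat(s_parts, [M_u, M_eps, M_sig])   # M x 1 scaling vector

# Equation (27): Hadamard division undoes the per-part scaling
y_hat = y_bar + (Phi_K @ alpha_hat) / s
```

For the unscaled model of equation (26) the division by `s` is simply dropped.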
The surrogate model with K basis vectors from the zero centered separate snapshot matrices can be written as:
$$\begin{aligned} \hat{{\mathbf {y}}}_\mathrm {sep}^{[K]}({\mathbf {x}}) = \left\{ \begin{array}{c} \left\{ \hat{{\mathbf {y}}}^{[K]}_\mathrm {u}({\mathbf {x}}) \right\} \\ \left\{ \hat{{\mathbf {y}}}^{[K]}_\upvarepsilon ({\mathbf {x}}) \right\} \\ \left\{ \hat{{\mathbf {y}}}^{[K]}_\upsigma ({\mathbf {x}}) \right\} \\ \end{array}\right\} = \left\{ \begin{array}{c} \left\{ \overline{{\mathbf {y}}}_\mathrm {u} + \sum \limits _{n=1}^K \varvec{\upvarphi }_{n,\mathrm {u}} {\hat{\upalpha }}_{n,\mathrm {u}}({\mathbf {x}}) \right\} \\ \left\{ \overline{{\mathbf {y}}}_\upvarepsilon + \sum \limits _{n=1}^K \varvec{\upvarphi }_{n,\upvarepsilon } {\hat{\upalpha }}_{n,\upvarepsilon }({\mathbf {x}}) \right\} \\ \left\{ \overline{{\mathbf {y}}}_\upsigma + \sum \limits _{n=1}^K \varvec{\upvarphi }_{n,\upsigma } {\hat{\upalpha }}_{n,\upsigma }({\mathbf {x}}) \right\} \\ \end{array} \right\} \end{aligned}$$
(28)
When separate snapshot matrices are used a different number of basis vectors in the truncated basis K can be chosen for each physical part. Note that, when using separate bases for each physical quantity, one surrogate model must be fitted per basis vector per physical quantity, whereas the other decomposition methods require fitting of only one surrogate model per basis vector for all physical quantities together.
The surrogate model with K basis vectors from the zero centered snapshot matrix that is assembled from the strain field can be written as:
$$\begin{aligned} \hat{{\mathbf {y}}}_{(\upvarepsilon )}^{[K]}({\mathbf {x}}) = \bar{{\mathbf {y}}} + \sum \limits _{n=1}^K \varvec{\upvarphi }_{n,(\upvarepsilon )} {\hat{\upalpha }}_{n,(\upvarepsilon )}({\mathbf {x}}) \end{aligned}$$
(29)