- Research article
- Open Access

# \({\text {FE}}^r\) method with surrogate localization model for hyperelastic composite materials

*Advanced Modeling and Simulation in Engineering Sciences*
**volume 7**, Article number: 39 (2020)

## Abstract

This study presents a method for constructing a surrogate localization model for a periodic microstructure, or equivalently, a unit cell, to efficiently perform micro-macro coupled analyses of hyperelastic composite materials. The offline process in this approach is to construct a response data matrix that stores the microscopic stress distributions in response to various patterns of macroscopic deformation gradients, followed by the proper orthogonal decomposition (POD) of the matrix to construct a reduced order model (ROM) of the microscopic analysis (localization) with properly extracted POD bases. Then, response surfaces of the POD coefficients are constructed so that the ROM can be continuous with respect to the input datum, namely, the macroscopic deformation gradient. The novel contributions of this study are the application of the L2 regularization to the interpolation approximations of the POD coefficients by use of radial basis functions (RBFs) to make the response surfaces continuous, and the combined use of the cross-validation and the Bayesian optimization to search for the optimal set of parameters in both the RBFs and the L2 regularization formula. The resulting model can be an alternative to microscopic finite element (FE) analyses in the conventional \({\text {FE}}^2\) method and accordingly realizes \({\text {FE}}^r\) with \(1<r\ll 2\). Representative numerical examples of micro-macro coupled analysis with the \({\text {FE}}^r\) are presented to demonstrate the capability and promise of the surrogate localization model constructed with the proposed approach, in comparison with the results of high-fidelity direct \({\text {FE}}^2\).

## Introduction

As a counterpart of the theoretical mechanics of heterogeneous media [1], which has been developed to support the design of composite materials since the 1950s, the mathematical theory of homogenization [2,3,4] was developed in applied mathematics around the mid-1970s as an area of variational methods and functional analysis. In the field of computational mechanics, a class of methods based on the homogenization theory is referred to as *computational homogenization* and is known to be a key technology for realizing two-scale, or equivalently, micro-macro analyses; see, for example, [5,6,7,8]. In particular, the most distinctive feature is the pair of “homogenization” and “localization” processes, in which numerical analyses are carried out to evaluate macroscopic material responses with microscopic ones as input data and microscopic structural responses with macroscopic ones as input data, respectively [9, 10].

Although there are a variety of methods of two-scale analysis based on homogenization theory, a periodic microstructure (unit cell) is commonly associated with a material point in the macrostructure. Also, the unit cell is identified with the representative volume element (RVE) introduced in micromechanics, which is the domain over which the volume averages of microscopic stress and strain are taken to evaluate the macroscopic ones. Because of this simple structure, computational homogenization enables us to easily obtain the relationship between macroscopic stress and strain with the help of a friendly discretization method such as the finite element method (FEM). However, the obtained relationship is just a numerical or discrete data set, and thus function forms of the macroscopic constitutive equations are never provided except for linear problems. With an eye on this feature, Terada and Kikuchi [11] proposed the method of micro-macro coupled two-scale analyses, by which the macroscopic boundary value problem (BVP) is solved without a macroscopic constitutive law, but with the macroscopic constitutive response obtained as a numerical solution of the microscopic BVP in a unit cell located at each calculation point of the macrostructure. While the macro- and microscopic BVPs were not fully implicitly solved in their solution algorithm, a consistent method of micro-macro coupled two-scale analyses was proposed by Feyel *et al.* [12, 13] and subsequently referred to as \({\text {FE}}^2 \) in reflection of the nested structure of micro- and macroscopic FE analyses. At the same time, Terada and his co-workers [14, 15] presented formulations adherent to the mathematical homogenization elaborated with the theory of generalized and/or two-scale convergence [16]. However, these original forms of \({\text {FE}}^2\) are, as commonly recognized, too expensive to be practical, even when parallel computing is employed [17].

For a solution to emerge, Terada and Kikuchi [11] practiced the use of a database storing macroscopic stress components in the macroscopic strain space, which can be constructed by carrying out a series of numerical material tests (NMTs) on a unit cell. However, the linear interpolation of those data points was not suitable for implicit solution methods for macroscopic BVPs and could not accommodate subsequent developments. Along this line, Yvonnet *et al.* [18, 19] took another database approach, in which the outer product decomposition, called PARAFAC, is applied to determine a numerically explicit potential (NEXP) of the macroscopic strain energy by use of spline functions. Since the NEXP has a smooth function form, it can be directly utilized for macroscopic analyses. Their more recent work has adopted neural networks to construct the NEXP more accurately, which carries the advantage over PARAFAC of determining more than ten independent parameters in the three-dimensional (3D) potential function form [20]. These can be regarded as data-driven approaches to two-scale analyses, as a database storing macroscopic responses is developed in advance, or equivalently, offline, by a number of high-fidelity computations to solve a microscopic BVP.

In this context, recent years have seen a renewal of interest in the use of data-scientific techniques to construct reduced order models (ROMs) of high-fidelity computations in two-scale computational homogenization. Especially with a view to \({\text {FE}}^2\)-type micro-macro coupled analyses, a pioneering work was presented by Yvonnet and He [21], who proposed a ROM-based offline-online computational framework. In their method, a database of microscopic displacements is constructed by performing NMTs on a unit cell of hyperelastic composite materials, and proper orthogonal decomposition (POD) is applied to it to extract basis vectors characteristic of the microscopic mechanical behavior. Owing to the resulting ROM, microscopic linearized equations are solved in the dimensionally reduced space so that the computational cost of solving microscopic BVPs in an \({\text {FE}}^2\) calculation can be significantly reduced. An effort to improve the computational efficiency of offline computations has also recently been made by Fritzen and Kunc [22], who applied POD to the data set of microscopic fluctuation strains obtained by a limited number of NMTs and then solved minimization problems to effectively construct the above-mentioned NEXP.

Other noteworthy contributions have been presented by Fritzen and co-workers [23,24,25], who developed the potential-based reduced basis model order reduction (pRBMOR) by extending the Non-uniform Transformation Field Analysis (NTFA) [26] developed for inelastic composite materials. They applied POD to the microscopic plastic strain field to construct a ROM and determined its coefficients by solving the extremal problem of the relevant macroscopic incremental potential; see also References [27,28,29,30]. An earlier study by Oskay and Fish [31] is also worth specific mention as an effort to reduce the computational cost of microscopic analyses in \({\text {FE}}^2\) by the combined use of Transformation Field Analysis (TFA) and homogenization theory. However, most of these ROM-oriented studies have strived to ‘reduce’ the computational cost of solving a microscopic BVP for more efficient \({\text {FE}}^2\) computations, but not to ‘replace’ a microscopic analysis, or equivalently *localization*, by a surrogate model.

This study proposes a surrogate localization model to perform micro-macro coupled analyses of hyperelastic composite materials with unit cells, for which the microscopic analyses to be conducted at each macroscopic calculation point are replaced by a sort of ROM constructed by the application of POD to the response data set consisting of microscopic stresses. The response data matrix is created by conducting a series of microscopic analyses in response to various patterns of macroscopic deformation gradients and, therefore, the resulting ROM is for “localization” in the sense that the macroscopic deformation is localized in a unit cell. In order that the ROM can be continuous with respect to the input datum, namely, the macroscopic deformation gradient, we construct the response surfaces of the POD coefficients of the ROM using RBFs. Additional contributions of this study are the application of the L2 regularization to the interpolation approximations of the POD coefficients by use of radial basis functions (RBFs) to make the response surfaces continuous, and the combined use of the cross-validation and the Bayesian optimization to search for the optimal set of parameters in both the RBFs and the L2 regularization formula. The resulting model replaces microscopic finite element (FE) analyses in the conventional \({\text {FE}}^2\) method and can accordingly be termed \({\text {FE}}^r\) with \(1<r\ll 2\). Then, representative numerical examples of micro-macro coupled analyses with the \({\text {FE}}^r\) are presented to demonstrate the capability and promise of the proposed surrogate localization model in comparison with the results of high-fidelity direct \({\text {FE}}^2\).

## Two-scale boundary value problem

A two-scale boundary value problem (BVP) is summarized in a general setting. Although an arbitrary constitutive law can in principle be employed for the constituents in a periodic microstructure (unit cell), we confine ourselves to hyperelastic composite materials in this study. In the formulation, field variables with a tilde, \({\tilde{\bullet }}\), indicate macroscopic ones, and those without tildes are microscopic ones unless otherwise specified.

### Two-scale kinematics

Let us consider a quasi-static equilibrium problem of a heterogeneous material with periodic microstructures, i.e., unit cells, whose overall domain is denoted by \(\varOmega ^{\varepsilon } \subset {\mathbb {R}}^{n_{\text {dim}}}\) in the initial configuration. Here, \(n_{\text {dim}}\) is the spatial dimension. Along the lines of the mathematical theory of homogenization with relevant assumptions [2,3,4], we denote the macro- and micro-scale coordinates by \({\varvec{X}}\) and \({\varvec{Y}}\), the latter of which is introduced to measure fine-scale mechanical behavior in a single unit cell. According to the consequence of generalized convergence theory [16], the domain \(\varOmega ^{\varepsilon }\) can be separated into the macroscopic domain of an equivalent homogeneous material, \({\tilde{\varOmega }}^0\), and that of a unit cell, \(\varOmega ^0\), located at each macroscopic material point \({\varvec{X}}\in {\tilde{\varOmega }}^0\). In what follows, all the field variables are assumed to be periodic with respect to \({\varvec{Y}}\), i.e., Y-periodic.

The macro- and microscopic deformation gradients, \(\tilde{{\varvec{F}}}\left( {\varvec{X}}\right) \) and \({\varvec{F}}\left( {\varvec{X}},{\varvec{Y}}\right) \), are given, respectively, as

where \(\tilde{{\varvec{u}}}\left( {\varvec{X}}\right) \) is the macroscopic displacement, \({\varvec{u}}'\left( {\varvec{X}},{\varvec{Y}}\right) \) is the fluctuation displacement at the micro-scale and \({\varvec{1}}\) is the 2nd order identity tensor. Here, \(\nabla _{X}\) and \( \nabla _{Y}\), are differential operators with respect to the macro- and micro-scales, respectively. Also, the fluctuation displacement satisfies the following constraint condition to ensure the existence of the solution for the microscopic equilibrium problem [2,3,4]:

In terms of the microscopic displacement denoted by \({\varvec{w}}\left( {\varvec{X}},{\varvec{Y}}\right) \) at \({\varvec{Y}}\in \varOmega ^0\), the microscopic deformation gradient can also be represented as

Thus, the microscopic displacement is obtained as

where \(\tilde{{\varvec{H}}}\left( {\varvec{X}}\right) \) is the macroscopic displacement gradient and the vector of integration constants has been neglected without loss of generality. It can be easily confirmed from Eqn. (2) that the volume average of the microscopic deformation gradient over the unit cell is equal to the macroscopic one as

because the fluctuation displacement is Y-periodic. Here, \(|\varOmega ^{0}|\) is the volume of the unit cell.

Due to the Y-periodicity of the fluctuation displacement, the following constraint conditions are imposed on the unit cell boundaries \(\partial {\varOmega ^{[\pm J]}_{0}}\):

where superscripts \([\pm J]\) indicate the evaluation on the boundaries \(\partial {\varOmega ^{[\pm J]}_{0}}\). Here, we have assumed that the unit cell is a rectangle or rectangular parallelepiped in case of \(n_{\text {dim}}=2\) or \(n_{\text {dim}}=3\), respectively. Combining this with Eqn. (5), the microscopic displacement is constrained as

where \({\varvec{L}}^{[J]}\) is the vector connecting a pair of material points on the boundary surfaces \(\partial {\varOmega }^{[\pm J]}_{0}\) and is defined as \({\varvec{L}}^{[J]} := {\varvec{Y}}|_{\partial {\varOmega }^{[+J]}_{0}} - {\varvec{Y}}|_{\partial {\varOmega }^{[ -J]}_{0}}\) [32].

### Microscopic BVP

The microscopic displacement \({\varvec{w}}\left( {\varvec{X}},{\varvec{Y}}\right) \) is the solution of the following equilibrium equation:

where \({\varvec{P}}\left( {\varvec{X}},{\varvec{Y}}\right) \) is the microscopic 1st Piola-Kirchhoff (PK) stress that can be determined by an arbitrary set of constitutive equations. In this study, we confine ourselves to composite materials without inelastic deformations so that the following form can be adopted:

where \(\psi \left( {\varvec{F}} \right) \) is a strain energy function. Together with the boundary conditions in Eqn. (8), these equations constitute the microscopic BVP.

### Macroscopic BVP

The equilibrium equation in terms of the macroscopic 1st PK stress \({\tilde{\varvec{P}}}\) is given as

which has to be solved for the macroscopic displacement \(\tilde{{\varvec{u}}}\left( {\varvec{X}}\right) \) along with Eqn. (1) and the relevant relationship between \({\tilde{\varvec{P}}}({\varvec{X}})\) and \({\tilde{\varvec{F}}}({\varvec{X}})\). Here, \(\tilde{{\varvec{b}}}\) is the body force, but it is omitted for simplicity in the subsequent sections.

Although no specific function form of \({\tilde{\varvec{P}}}({\varvec{X}})\) is provided in mathematical homogenization, the following relationship must be satisfied along with Eqn. (6):

That is, the macroscopic stress is equal to the volume average of the microscopic stress over the unit cell. Therefore, the macroscopic 1st PK stress that satisfies the macroscopic equilibrium equation (11) can be determined through Eqn. (12), for which the microscopic BVP has to be solved with the macroscopic deformation gradient \(\tilde{{\varvec{F}}} = \tilde{{\varvec{H}}} + \mathbf{1 }\) being data.

### Solution scheme: \({\text {FE}}^2\)

As can be seen from the above formulation, the micro- and macroscopic BVPs complement each other; that is, they are coupled with each other via the relationships (6) and (12). Thus, the set of these problems is referred to as the two-scale BVP. The corresponding solution method by use of the finite element method (FEM) is called \({\text {FE}}^2\), since FE analyses have to be carried out to solve the microscopic BVPs with input data of the macroscopic deformation gradients evaluated at all the calculation points in the FE model for the macroscopic BVP. In the case of implicit solution methods with the Newton-Raphson iterative procedure, the microscopic stress satisfying the microscopic equilibrium equation is referred to at every iteration and time step in solving the macroscopic BVP. Therefore, the computational cost is tremendously high when practical problems are to be solved. Although there are some remedies to reduce the cost, such as parallel computations [17] and the decoupling [32,33,34] or mixing [26, 35] schemes, the present study takes the reduced-order approach based on the proper orthogonal decomposition, which is the subject of the next section, partially following previous developments in the literature.

## Order reduction by proper orthogonal decomposition

This section is devoted to a brief summary of the proper orthogonal decomposition (POD), which was first utilized by Lumley [36] in 1967 to extract the flow structures characteristic of turbulent flows for the purpose of model order reduction (MOR).

First, let us define an \(N_s\)-dimensional response-data vector in response to a certain input datum \({\varvec{f}}_i\) as

where *i* ranges from 1 to \(N_c\) that is the number of input data. By standardizing the response data vectors by their mean \(\overline{\varvec{\phi }} := \frac{1}{N_c} \sum _{i=1}^{N_c} \varvec{\phi }_{i} \in {\mathbb {R}}^{N_\mathrm {s}}\), we define the following response data matrix:

We also define the sample covariance matrix as

It is known that the maximization problem of the variance of the response data vectors reduces to the eigenvalue problem of the sample covariance matrix as [36]

where \(\lambda _j\) are the eigenvalues and \({\varvec{s}}_j \in {\mathbb {R}}^{N_s}\) are the normalized eigenvectors that are orthogonal to each other. Here, the index “*j*” indicates the mode number, and the eigenvalues are arranged in descending order \(\left( \lambda _1> \ldots> \lambda _j> \ldots > \lambda _{N_{\mathrm {c}}}\right) \). If the sample covariance matrix is defined as

the corresponding eigenvalue problem becomes

along with the following set of relationships:

Here, \(|| \bullet ||\) denotes the Euclidean norm. Since the eigenvalue problems, (16) and (18), are equivalent via these relationships, we can choose either of them according to the magnitude relationship between \(N_s\) and \(N_c\) [37]. Thus, the dimension of the sample covariance matrix is set at \(n:= \min (N_s, N_c)\) depending on the situation.

On the other hand, the singular value decomposition of \(\overline{{\varvec{Q}}}\) is given as

where we have defined the constituent matrices as

By use of these components, the POD of the response data vector in response to input datum \({\varvec{f}}_i\) can be expressed as

Here, we have defined the POD coefficients as \(\alpha _{ij} := V_{ij} \sigma _j\) and, in turn, \({\varvec{s}}_j\) are referred to as POD bases [38].

In order to extract the significant eigenmodes, we define the following contribution rate of mode *j* as a quantitative index of content rate of the information about the original data:

Also, introducing a threshold \(\delta \) to measure the information loss with respect to the cumulative contribution rate, we determine the number of modes to be extracted, \(N_r \le n:=\min (N_s, N_c)\), such that

With this number of POD bases, the response data vector can be approximated as

This realizes the model order reduction (MOR) of the original response data set, because the dimension of the space spanned by the POD bases has been reduced from either \(N_s\) or \(N_c\) to \(N_r\). The procedure of the MOR is summarized in Box 1.
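The offline POD procedure summarized above can be sketched in a few lines. The following is a minimal, self-contained example on synthetic snapshot data (the sizes \(N_s\), \(N_c\) and the threshold \(\delta\) are illustrative, not the paper's values): the response data matrix is centered by the mean vector, a thin SVD supplies the POD bases and coefficients \(\alpha_{ij} = V_{ij}\sigma_j\), and the number of retained modes \(N_r\) is chosen from the cumulative contribution rate.

```python
import numpy as np

# Synthetic stand-in for the response data matrix: one column per input pattern.
rng = np.random.default_rng(0)
N_s, N_c = 200, 30
Q = rng.standard_normal((N_s, N_c))

phi_bar = Q.mean(axis=1, keepdims=True)   # mean response vector
Q_bar = Q - phi_bar                       # standardized (centered) data matrix

# Thin SVD: Q_bar = S diag(sigma) V^T; the columns of S are the POD bases s_j.
S, sigma, Vt = np.linalg.svd(Q_bar, full_matrices=False)
alpha = Vt.T * sigma                      # POD coefficients alpha_ij = V_ij * sigma_j

# Contribution rates and truncation: keep the smallest N_r modes whose
# cumulative rate first exceeds the threshold delta.
delta = 0.99
rates = sigma**2 / np.sum(sigma**2)
N_r = int(np.searchsorted(np.cumsum(rates), delta) + 1)

# Reduced-order reconstruction of snapshot i with the retained modes.
i = 0
phi_rom = phi_bar[:, 0] + S[:, :N_r] @ alpha[i, :N_r]
```

With all modes retained, the reconstruction is exact; truncation trades accuracy for the reduced dimension \(N_r\).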

## Interpolation approximation of POD coefficients

The POD coefficients \(\alpha _{ij}\) in the approximation (25) obtained by the MOR are discrete numbers corresponding to input data \({\varvec{f}}_i\), but each of them can be regarded as the value of a certain continuous scalar-valued function evaluated at data point \({\varvec{f}}_i\). In this section, we construct such a function by the interpolation with radial basis functions (RBFs) and present a method to make the overall function form smooth.

### RBF interpolation

The RBF interpolation [39] is a method to construct an approximate function \(\varPhi ({\varvec{f}})\) that takes known values \(\varphi _{i}\) at data points \( {\varvec{f}}_i \in {\mathbb {R}}^{N}\). Due to the nature of the RBFs, the function \(\varPhi ({\varvec{f}})\) thus interpolated necessarily becomes continuous.

Let us first define the following Euclidean norm that represents the distance between an independent variable \({\varvec{f}}\) and a data point \({\varvec{f}}_i\):

A RBF is a certain type of function with this distance function being an independent variable. In this study, we employ the following Gaussian functions:

Using these as bases, we construct a continuous function such that

where \(\beta \) is a parameter correlated with the distribution density of \({\varvec{f}}_i\). Here, the coefficients \(\omega _i\) of \(\psi \left( r_i({\varvec{f}}) \right) \) are called weights and can be determined in the following way:

We evaluate Eqn. (28) at the *k*-th input point \({\varvec{f}}_k\) as

Then, the system of linear equations can be obtained as

where we have defined the vectors \(\varvec{\varphi }\) and \(\varvec{\omega }\) as

Also, the coefficient matrix \(\varvec{\psi }\), which is referred to as kernel matrix, has been defined as

which is invertible (regular) in most cases. Thus, the weights can be obtained as \(\varvec{\omega } = \varvec{\psi }^{-1} \varvec{\varphi }\).
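A minimal sketch of this interpolation with the Gaussian bases, on invented sample data (the function being sampled and the value of \(\beta\) are illustrative): the kernel matrix is assembled from pairwise distances, the weights are obtained by solving the linear system, and the interpolant is evaluated at an arbitrary point.

```python
import numpy as np

def gaussian_rbf(r, beta):
    # Gaussian basis psi(r) = exp(-(r / beta)^2)
    return np.exp(-(r / beta) ** 2)

def fit_rbf(F, varphi, beta):
    # F: (N_c, N) data points, varphi: (N_c,) known values.
    r = np.linalg.norm(F[:, None, :] - F[None, :, :], axis=-1)  # pairwise distances
    psi = gaussian_rbf(r, beta)                                 # kernel matrix
    return np.linalg.solve(psi, varphi)                         # weights omega

def eval_rbf(f, F, w, beta):
    # Phi(f) = sum_i omega_i * psi(r_i(f))
    r = np.linalg.norm(F - f, axis=-1)
    return gaussian_rbf(r, beta) @ w

rng = np.random.default_rng(1)
F = rng.uniform(-1.0, 1.0, size=(20, 2))     # 20 data points in R^2
varphi = np.sin(F[:, 0]) * np.cos(F[:, 1])   # sampled scalar responses
w = fit_rbf(F, varphi, beta=0.5)
```

By construction, the interpolant reproduces the known values at the data points, up to the conditioning of the kernel matrix.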

### Smoothing response surfaces

When the weights obtained by the above-mentioned procedure are used in Eqn. (28), the resulting response surfaces are continuous with respect to the independent variable \({\varvec{f}}\), but often become rugged, as will be seen later. Also, the undulations of their derivatives become prominent and are therefore inadequate for functions representing physical quantities. In order to suppress such undulations, or equivalently, to smooth the response surfaces of Eqn. (28), we replace the system of linear equations (30) by

where \(\eta \) is a smoothing parameter. This process is called the L2 regularization [40] and has been utilized to mitigate the ill effect of over-training in the area of machine learning.
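A sketch of this smoothing step on invented data, assuming Eqn. (33) takes the common ridge form \((\varvec{\psi}^{\mathsf T}\varvec{\psi} + \eta \varvec{I})\,\varvec{\omega} = \varvec{\psi}^{\mathsf T}\varvec{\varphi}\) (the precise formula used in the paper may differ): the regularization shrinks the weight vector, which is what damps the undulations of the response surface at the cost of exact interpolation.

```python
import numpy as np

def fit_rbf_ridge(F, varphi, beta, eta):
    # Gaussian kernel matrix from pairwise distances.
    r = np.linalg.norm(F[:, None, :] - F[None, :, :], axis=-1)
    psi = np.exp(-(r / beta) ** 2)
    # L2-regularized (ridge) weights: (psi^T psi + eta I) w = psi^T varphi.
    A = psi.T @ psi + eta * np.eye(len(varphi))
    return np.linalg.solve(A, psi.T @ varphi)

rng = np.random.default_rng(2)
F = rng.uniform(-1.0, 1.0, size=(20, 2))
varphi = np.sin(F[:, 0]) * np.cos(F[:, 1])
w0 = fit_rbf_ridge(F, varphi, beta=0.8, eta=0.0)   # eta = 0 recovers interpolation
w1 = fit_rbf_ridge(F, varphi, beta=0.8, eta=1e-2)  # eta > 0 smooths, no exact fit
```

The shrinkage of the weight norm for \(\eta > 0\) is the mechanism behind the smoother surfaces discussed below.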

However, since the function (28) with the L2-regularized weights does not necessarily reproduce \(\varphi _k\) at the data point \({\varvec{f}}_k\), the most appropriate value of the parameter \(\eta \) must be chosen so that the profile of the response surface and the accuracy of the interpolation approximation can be optimized simultaneously. In addition, when the Gaussian functions (27) are employed as bases, the Gaussian parameter \(\beta \) is also influential on the smoothing process. Therefore, the theoretical determination of the optimal set of parameters \((\eta ,\beta )\) is almost impossible, while a trial-and-error process would be inefficient. In this study, we propose to apply the cross-validation [41] combined with the Bayesian optimization [42, 43] in terms of the difference between the data points and the corresponding values of the approximation function.

In the cross-validation, the \(N_{\mathrm {c}}\) data are divided into two groups: test data \(\varvec{\varphi }' \in {\mathbb {R}}^{N_{c}^{\text {Test}}}\) for error estimation of trained results and training data \(\varvec{\varphi }''\in {\mathbb {R}}^{N_{c}^{\text {Training}}}\). By use of the training data only, we obtain the weights \(\varvec{\omega }\in {\mathbb {R}}^{N_{c}^{\text {Training}}}\) by solving Eqn. (33). Then, using these weights, we evaluate the approximation function Eqn. (28) at the \(N_{c}^{\text {Test}}\) corresponding data points, with \(N_c\) being replaced by \(N_{c}^{\text {Training}}\), and calculate the training error in comparison with the test data. To avoid bias due to an arbitrary selection of test data, we define the ‘average’ error by taking the average of the errors calculated with \(N_{\mathrm {cv}}\) different patterns of division.

The parameters \(\beta\) and \(\eta \) have to be determined so as to minimize the average error computed this way. Although several optimization methods could serve for this minimization, we employ the Bayesian optimization [42, 43] in view of its preferable features for the present purpose. In fact, the Bayesian optimization is known to search intensively in regions with smaller errors, while occasionally visiting regions with low exploration frequencies so as to avoid lapsing into a local solution. Also, it is known to provide a global optimum of a continuous approximate function with a smaller number of trials than other possible methods such as a grid search.

Figure 1 shows the schematic diagram for the search for the optimal set of parameters \((\eta ^* , \beta ^*)\) by the combined use of the cross-validation and the Bayesian optimization. With an initial set of parameters \((\eta ^0 , \beta ^0)\) suitably selected, a trial cycle of average error estimation and optimization is repeated several times, and the iterative process is terminated when the error variation becomes sufficiently small. The concrete procedure in the context of micro-macro coupled analysis will be demonstrated later in the representative numerical example. Finally, the procedure of the RBF interpolation along with the parameter optimization is summarized in Box 2.
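The cross-validation error driving this parameter search can be sketched as follows. A plain grid search stands in here for the Bayesian optimization, since only the error estimate being minimized matters for the illustration; swapping in a Gaussian-process optimizer would change only how candidate pairs \((\beta, \eta)\) are proposed. All data, fold counts and candidate values are invented.

```python
import numpy as np

def gaussian_kernel(A, B, beta):
    r = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=-1)
    return np.exp(-(r / beta) ** 2)

def cv_error(F, varphi, beta, eta, n_folds=5):
    # Average held-out squared error over n_folds divisions of the data.
    n = len(varphi)
    folds = np.array_split(np.random.default_rng(0).permutation(n), n_folds)
    errs = []
    for test in folds:
        train = np.setdiff1d(np.arange(n), test)
        psi = gaussian_kernel(F[train], F[train], beta)
        # Ridge-regularized weights fitted on the training data only.
        w = np.linalg.solve(psi.T @ psi + eta * np.eye(len(train)),
                            psi.T @ varphi[train])
        pred = gaussian_kernel(F[test], F[train], beta) @ w
        errs.append(np.mean((pred - varphi[test]) ** 2))
    return np.mean(errs)

rng = np.random.default_rng(3)
F = rng.uniform(-1.0, 1.0, size=(40, 2))
varphi = np.sin(F[:, 0]) * np.cos(F[:, 1])

# Candidate (beta, eta) pairs; the best pair minimizes the average CV error.
grid = [(b, e) for b in (0.3, 0.8, 2.0) for e in (1e-6, 1e-3, 1e-1)]
beta_opt, eta_opt = min(grid, key=lambda p: cv_error(F, varphi, *p))
```

Note that the regularization term \(\eta\varvec{I}\) also guarantees that every candidate system is solvable, so the search never fails on an ill-conditioned kernel.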

## \({\text {FE}}^r\) method

Following the above-explained procedure to construct continuous and smooth POD coefficients, we propose a method of micro-macro coupled analysis with a POD-based surrogate localization model as an alternative to \({\text {FE}}^2\). Since the surrogate localization model, which is a ROM of the microscopic analysis, significantly reduces the computational cost of \({\text {FE}}^2\), the method is referred to as \({\text {FE}}^r\) with \(1<r\ll 2\) in this study.

### Offline computations

In the proposed method, the data points \({\varvec{f}}_i\) and response data vectors \({\varvec{\phi }}_i\) in Eqn. (13) correspond to the macroscopic deformation gradients \(\tilde{{\varvec{F}}}\) in Eqn. (8) and the microscopic stresses \({\varvec{{P}}}\) at all the calculation points in a unit cell’s FE model prepared for solving the microscopic equilibrium problem (9), respectively. Thus, in the *offline* process of the proposed \({\text {FE}}^r\), the response data matrix corresponding to Eqn. (14) is first prepared by carrying out a series of localization analyses with various patterns of macroscopic deformation gradients \(\tilde{{\varvec{F}}}\). Then, applying POD to produce the ROM of the localization, we construct continuous approximate functions of the resulting POD coefficients by use of RBFs. Finally, the parameter tuning for the L2 regularization is conducted by the combined use of the cross-validation and the Bayesian optimization. In what follows, these offline computations are explained in order.

First, various patterns of the macroscopic deformation gradients are prepared on the assumption of plane strain states. Here, the patterns are chosen so that the microscopic mechanical behavior can be well characterized. In this study, data points are thoroughly selected in the four-dimensional space of macroscopic deformation gradients such that

where \(\delta _{jk}\) is the Kronecker delta. Here, \({{\tilde{F}}}^{\mathrm {min}}_{jk}\) and \({{\tilde{F}}}^{\mathrm {max}}_{jk}\) are the minimum and maximum values of the macroscopic deformation gradient components for the localization analyses, and have to be set in consideration of the range of deformation in the macroscopic analysis. The total number of the data points is denoted by \(N_c\) for later use.
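A sketch of how such a set of data points might be generated for the plane-strain case: each of the four components of \(\tilde{{\varvec{F}}}\) is varied on a uniform grid about the corresponding entry of the identity. The ranges and grid resolution below are hypothetical choices, not the paper's.

```python
import numpy as np
from itertools import product

# Assumed ranges [F_min, F_max] per component, centered on delta_jk.
n_pts = 3
ranges = {
    (0, 0): (0.9, 1.1),   # F11 around delta_11 = 1
    (0, 1): (-0.1, 0.1),  # F12 around delta_12 = 0
    (1, 0): (-0.1, 0.1),  # F21 around delta_21 = 0
    (1, 1): (0.9, 1.1),   # F22 around delta_22 = 1
}
axes = [np.linspace(lo, hi, n_pts) for lo, hi in ranges.values()]

# Full tensor-product grid of macroscopic deformation gradients.
F_patterns = [np.array([[f11, f12], [f21, f22]])
              for f11, f12, f21, f22 in product(*axes)]
N_c = len(F_patterns)   # total number of data points, n_pts^4
```

Each entry of `F_patterns` would then be imposed on the unit cell as the input of one localization analysis.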

Second, localization analyses are carried out with the \(N_c\) patterns of macroscopic deformation gradients and the response data vectors are constructed as

where \({\varvec{P}}_{il}\) is the microscopic 1st PK stress in Voigt form at calculation point *l* and has \(N_p\) (\(=4\) in the case of plane strain problems) components. Since the calculation points commonly correspond to Gaussian integration points in FEA, the calculation point number is identified with \(l:=(e-1)\times N_g + k\), where \(N_g\) is the number of Gaussian integration points per element, *e* is the element number and *k* stands for the integration point number within element *e*. Thus, each response data vector consists of \(N_s = N_e\times N_g\times N_p \) components.
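The indexing convention can be checked with a few lines; the model sizes below are illustrative only.

```python
# Flattened calculation-point index l = (e - 1) * N_g + k, with e the
# element number and k the integration point within element e (both 1-based),
# as used to order the entries of each response data vector.
def calc_point_index(e, k, N_g):
    return (e - 1) * N_g + k

# Hypothetical unit-cell FE model sizes.
N_e, N_g, N_p = 100, 4, 4      # elements, Gauss points per element, stress components
N_s = N_e * N_g * N_p          # length of each response data vector
```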

Third, with reference to Box 1, POD is applied to the response data matrix that is the collection of the data vector constructed above. Studying the magnitude relationship of the contribution rates corresponding to the POD modes, we extract \(N_r\) modes to construct the ROM of microscopic analysis with \(\tilde{{\varvec{F}}}_i\) being input data point such that

where the POD coefficients \(\alpha _{ij}\) and the mean response data vector \(\overline{\varvec{\phi }}\) have been defined respectively as

Here, the order of components in each of these vectors is in accordance with that of POD bases \({\varvec{s}}_j\) and Eqn. (35). Furthermore, \({\hat{\bullet }}\) is a value at each microscopic Gaussian point.

Fourth, by applying RBF interpolation approximation, we construct the response surfaces of the POD coefficients \(\alpha _{ij}\), which have been obtained as discrete values. It is hence assumed that each of these coefficients associated with the *j*-th POD basis \({\varvec{s}}_j\) is the value of a continuous function \({\mathcal {A}}_{j} ({\tilde{\varvec{F}}})\) evaluated at data point \(\tilde{{\varvec{F}}}_i\) such that

Then, identifying this function \({\mathcal {A}}_{j} ({\tilde{\varvec{F}}})\) with \(\varPhi \left( {\varvec{f}}\right) \) in Eqn. (28), we construct its approximation function by use of the Gaussian RBFs such that

where \(r_i (\bullet )\) and \(\psi \left( r_i (\bullet ) \right) \) have been defined in Eqns. (26) and (27), respectively. Following the procedure presented in Box 2, we can easily calculate the weights as \({\varvec{\omega }}_j = {\varvec{\psi }}^{-1} {\varvec{\alpha }}_j\).

As mentioned in the previous section, the direct use of the weights obtained as solutions of Eqn. (30) is known to cause a highly oscillatory response surface of \({\mathcal {A}}_{j} ({\tilde{\varvec{F}}})\). In fact, when the L2 regularization is not applied and, for example, the RBF parameter is set at \(\beta = 2.0\), the response surfaces of the 1st and 2nd POD coefficients are obtained as shown in Fig. 2, which are obviously inadequate representations of the material behavior. Therefore, the fifth step is devoted to smoothing the response surfaces with the help of the L2 regularization combined with the cross-validation accompanied by the Bayesian optimization, as described before. Finally, with the determined optimal set of parameters \((\eta ^*, \beta ^*)\), the microscopic stress values at all the calculation points can be represented by the following ROM of microscopic analysis:

Here, since the order of the components of the response vector in Eqn. (40) is defined in accordance with Eqns. (35) and (37), the \(l~(:=(e-1)\times N_g + k)\)-th microscopic 1st PK stress tensor (in Voigt form) at calculation point *k* in element *e* in a unit cell’s FE model is expressed as

where \(\breve{\bullet }\) is a value at each microscopic Gaussian point calculated by the surrogate localization model.

Once the ROM in the form of Eqn. (40) is determined, we obtain the approximate solution of Eqn. (9) in response to an arbitrary macroscopic deformation gradient without carrying out an FE analysis for localization, as long as the gradient does not exceed the range preliminarily set in the first step of the offline process. Thus, the ROM in Eqn. (40) is worthy of being called a “surrogate localization model” along the lines of the mathematical theory of homogenization.

### Online computation

The online process is to solve the macroscopic BVP consisting of Eqns. (1), (11) and (40), the last of which is the surrogate localization model obtained in the offline process, along with relevant boundary conditions. According to the homogenization theory presented in the second section, the macroscopic 1st PK stress is computed with Eqn. (12). Substituting the surrogate localization model (41) into Eqn. (12) and applying the Gaussian quadrature rule, we obtain the surrogate macroscopic constitutive law as

where \(W_{(e,k)} \) is the weight at integration point *k* multiplied by the Jacobian associated with area transformation for element *e*.
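The quadrature-based volume average of Eqn. (42) can be sketched as follows; the number of Gauss points and the stress values are hypothetical:

```python
import numpy as np

# Hypothetical unit cell with 8 Gauss points; stresses in 2D Voigt form.
rng = np.random.default_rng(2)
P_micro = rng.standard_normal((8, 4))  # surrogate microscopic 1st PK stresses
W = np.full(8, 0.25)                   # quadrature weights times Jacobians

# Volume average over all calculation points (e, k), as in Eqn. (42):
# the denominator is the unit cell volume recovered by the same quadrature.
P_macro = (W[:, None] * P_micro).sum(axis=0) / W.sum()
```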

For the implicit solution method with the Newton-Raphson iterative procedure in solving the macroscopic BVP, the tangent moduli \(\frac{\partial \tilde{{\varvec{P}}} ({\tilde{\varvec{F}}} )}{\partial {\tilde{\varvec{F}}}}\) are needed. Since the surrogate macroscopic 1st PK stress in Eqn. (42) is an explicit function of the macroscopic deformation gradient \(\tilde{{\varvec{F}}}\), the tangent moduli tensor can easily be computed as

It should be noted that \(\tilde{{\varvec{P}}} ({\tilde{\varvec{F}}})\) and \(\frac{\partial \tilde{{\varvec{P}}} ({\tilde{\varvec{F}}} )}{\partial {\tilde{\varvec{F}}}}\) in Eqns. (42) and (43), respectively, can easily be obtained by computing the unit cell volume/area averages of the POD bases in advance as follows:

Here, in view of Eqns. (39) and (27), the derivatives of the POD coefficients are evaluated as
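Assuming a Gaussian RBF as a stand-in for Eqn. (27), the derivative of a POD coefficient function follows from the chain rule; the sketch below evaluates both the coefficient and its gradient with respect to the components of the macroscopic deformation gradient, with hypothetical data:

```python
import numpy as np

def rbf_and_grad(x, centers, beta):
    # Gaussian RBF values psi_i(x) and gradients d psi_i / d x, assumed
    # as a stand-in for the basis of Eqn. (27) and its derivative in Eqn. (39).
    diff = x[None, :] - centers                 # (N_c, dim)
    r2 = (diff ** 2).sum(axis=1)
    psi = np.exp(-r2 / beta ** 2)
    dpsi = (-2.0 / beta ** 2) * psi[:, None] * diff
    return psi, dpsi

# Hypothetical weights and data points; x plays the role of F_tilde (Voigt).
rng = np.random.default_rng(4)
centers = rng.uniform(0.9, 1.5, size=(20, 4))
omega_j = rng.standard_normal(20)
x = np.array([1.1, 1.0, 0.95, 1.2])

psi, dpsi = rbf_and_grad(x, centers, beta=0.5)
A_j = omega_j @ psi    # j-th POD coefficient at x
dA_j = omega_j @ dpsi  # its gradient, feeding the tangent moduli of Eqn. (43)
```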

The procedure of the proposed \({\text {FE}}^r\) is summarized in Box 3. While a microscopic BVP, which is governed by Eqns. (9), (2), (10) and (8), has to be solved at each calculation point of the FE model of a macrostructure in the standard \({\text {FE}}^2\), the evaluation of the macroscopic stresses in response to the macroscopic deformation gradient \(\tilde{{\varvec{F}}}\) is done with the surrogate localization model followed by the numerical integration in Eqn. (42). Thus, the exponent ‘2’ in \({\text {FE}}^2\), which characterizes the computational cost, reduces to ‘*r*’, which is greater than 1 but much smaller than 2.

## Representative numerical example

A simple, but illustrative numerical example is presented in the two-dimensional (2D) setting. Figure 3 shows the unit cell’s FE model under consideration, which is composed of a single hyperelastic material. To perform micro-macro coupled analyses, we prepared two macrostructures shown in Fig. 4. To verify the performance of the proposed method of \({\text {FE}}^r\), the numerical results are compared with those obtained by the conventional \({\text {FE}}^2\). It should be noted that the volume used in the numerators in Eqns. (42) and (43) to evaluate the volume averaged quantities is replaced by that of the extended domain of the unit cell, \(\Omega ^{0\prime }=\Omega ^0\cup \vartheta \), where \(\vartheta \) is the domain of the void in this particular numerical example.

### Offline computation: microscopic analyses

The following energy function for a compressible neo-Hookean hyperelastic material is employed for the constituent of the unit cell:

where \( {\varvec{C}}\) is the right Cauchy-Green deformation tensor, \(\mu \) and \(\lambda \) are the Lamé constants, and \(J:=\det {\varvec{F}}\) is the Jacobian that measures volumetric deformation. Specific values of Young’s modulus and Poisson’s ratio are set at \(E={\mu (3 \lambda + 2 \mu )}/{(\lambda +\mu )} = 4000~[{\text {MPa}}]\) and \(\nu ={\lambda } / \{ 2(\lambda +\mu ) \} = 0.25\).

In this study, the components of \(\tilde{{\varvec{F}}}\) in Eqn. (34) are set at \(F^{\mathrm {min}}_{ij} = {\tilde{F}}^{\mathrm {min}}_{c} = 0.9\) and \(F^{\mathrm {max}}_{ij} = {\tilde{F}}^{\mathrm {max}}_{c} =1.5\). Also, \(\gamma _{ij}=0\) is included in the data points, and the intervals of \(\gamma _{ij}\) are commonly set to 0.2. Therefore, \(4^4 + 1 = 257\) patterns of the macroscopic deformation gradient are prepared as data points to evaluate the same number of response data vectors, each of which contains the microscopic 1st PK stresses at all the calculation points in the unit cell model.
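The construction of these data points can be sketched as a Cartesian product over the four components of the 2D deformation gradient. Note the interpretation of \(\gamma _{ij} = 0\) as the undeformed state \(\tilde{{\varvec{F}}} = {\varvec{I}}\) is an assumption here:

```python
import itertools

# Four values per component at interval 0.2 on [0.9, 1.5], for each of the
# four components of the 2D macroscopic deformation gradient (Eqn. (34)),
# plus the undeformed state (gamma_ij = 0, assumed to mean F = I).
values = [0.9 + 0.2 * i for i in range(4)]   # 0.9, 1.1, 1.3, 1.5
grid = list(itertools.product(values, repeat=4))
identity = (1.0, 0.0, 0.0, 1.0)              # (F_11, F_12, F_21, F_22)
data_points = grid + [identity]
```

The total count, 4^4 + 1 = 257, matches the number of data points stated above.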

After the series of microscopic analyses, or equivalently, localization, has been carried out, POD is applied to the obtained response data matrix in the form of Eqn. (14). Figure 5 shows the relationship between the IDs of the POD bases and the calculated contribution rates. As can be seen from this figure, the common logarithm of the contribution rates decreases exponentially for small IDs, but varies gradually with a linear decrease as the POD ID number increases. In this study, the threshold value \(\delta \) in Eqn. (24) is set to \(10^{-6}\), so that \(N_r (= 46)\) modes are extracted; namely, 46 sets of POD bases and the corresponding coefficients will be utilized to realize the ROM of microscopic analyses. It should be noted that the threshold value, which determines the number of POD modes to be extracted, has to be chosen in consideration of the reproducibility of the resulting ROM for the original response data. However, we have confirmed that a slight difference in the number of POD modes from the adopted value hardly affects the results.
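The mode extraction step can be sketched via the SVD of the response data matrix; whether Eqn. (24) thresholds the individual or the cumulative contribution rates is an assumption here (the individual rates are used below), and the synthetic matrix merely mimics the decaying mode content of Fig. 5:

```python
import numpy as np

# Synthetic response data matrix X (rows: stress entries at all Gauss
# points, columns: N_c snapshots) with exponentially decaying mode content.
rng = np.random.default_rng(3)
X = rng.standard_normal((200, 30)) * (0.5 ** np.arange(30))

U, s, Vt = np.linalg.svd(X, full_matrices=False)
theta = s ** 2 / (s ** 2).sum()   # contribution rates of the POD modes
delta = 1e-6                      # threshold, as in Eqn. (24)
N_r = int((theta > delta).sum())  # number of extracted POD modes
pod_bases = U[:, :N_r]            # retained POD bases
```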

Next, let us obtain continuous functions of the POD coefficients by the application of RBF interpolation and make their function forms smooth with the aid of the L2 regularization, which is accomplished by the combined use of cross-validation and Bayesian optimization as explained before. Specifically, the smoothing parameter \(\eta \) and the RBF parameter \(\beta \) in Eqn. (27) are determined to minimize the following error function:

where the vectors of *j*-th POD coefficients, \({\varvec{\alpha }}^{(j)}\) and \({\varvec{{\mathcal {A}}}}^{(j)}\), are weighted by their contribution rates \(\theta _{j}\). Also, \(\varvec{\alpha }^{(j)} \in {\mathbb {R}}^{N_c^{\text {Test}}}\) and \(\varvec{{\mathcal {A}}}^{(j)} \in {\mathbb {R}}^{N_c^{\text {Test}}}\) are the vectors consisting of the original POD coefficients and those of the values of the RBF-interpolated POD coefficient function evaluated at \(N_c^{\text {Test}}\) test data points, respectively. Here, the RBF-interpolated POD coefficient function has been constructed by use of \(N_c^{\text {Training}}\) training data. In addition, \(N_{\mathrm {cv}}\) is the number of division cases for cross-validation and the choice of test and training data is changed for each division case as illustrated in Fig. 1.
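A k-fold version of this parameter-selection loop can be sketched as follows. The Gaussian kernel, the ridge-type regularized fit, and all data are assumptions standing in for Eqns. (27), (33) and (48), whose exact forms and normalizations may differ:

```python
import numpy as np

def cv_error(points, alphas, theta, eta, beta, n_cv=6):
    # For each of n_cv splits, fit L2-regularized RBF weights on the
    # training folds, then accumulate the contribution-rate-weighted
    # squared error on the held-out test fold (a sketch of Eqn. (48)).
    idx = np.arange(len(points))
    err = 0.0
    for test in np.array_split(idx, n_cv):
        train = np.setdiff1d(idx, test)
        P_tr, P_te = points[train], points[test]
        d_tr = np.linalg.norm(P_tr[:, None, :] - P_tr[None, :, :], axis=-1)
        phi = np.exp(-(d_tr / beta) ** 2)          # training kernel matrix
        d_te = np.linalg.norm(P_te[:, None, :] - P_tr[None, :, :], axis=-1)
        phi_te = np.exp(-(d_te / beta) ** 2)       # test-vs-training kernel
        G = phi.T @ phi + eta * np.eye(len(train)) # ridge-regularized normal matrix
        for j in range(alphas.shape[1]):
            w = np.linalg.solve(G, phi.T @ alphas[train, j])
            err += theta[j] * np.sum((alphas[test, j] - phi_te @ w) ** 2)
    return err

# Hypothetical data: 30 data points, 3 POD coefficient functions.
rng = np.random.default_rng(5)
points = rng.uniform(0.9, 1.5, size=(30, 4))
alphas = rng.standard_normal((30, 3))
theta = np.array([0.7, 0.2, 0.1])  # illustrative contribution rates

err = cv_error(points, alphas, theta, eta=1.3019e-4, beta=0.5180)
```

The parameter values passed in the last line are the optimal pair \((\eta ^*, \beta ^*)\) reported below, used here purely for illustration.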

By setting \(N_{\mathrm {cv}}=6\), we have obtained the set of parameters as \(\eta ^*=1.3019 \times 10^{-4}\) and \(\beta ^* = 0.5180\), which minimizes the error function (48). Figure 6a shows the convergence history in terms of the error function (48) in the Bayesian optimization process. As can be seen, the process converges quickly to a certain value; about 100 trials seem to be sufficient in this particular case. Figure 6b shows the distribution of errors obtained in the cross-validation with the Bayesian optimization process. Here, since the region with small errors is located in the middle of this figure, it is safe to conclude that the determined set of parameters \(\eta ^*\) and \(\beta ^*\), which is indicated by the starred marker in the figure, is optimal. In addition, it has been confirmed that there is little difference when \(N_{\mathrm {cv}} > 6\). To confirm the effectiveness of the L2 regularization accompanied by the combined use of cross-validation and Bayesian optimization, let us examine the resulting response surfaces of the POD coefficients with the optimal set of parameters \(\eta ^*\) and \(\beta ^*\). Given the patterns of macroscopic deformation gradient with some components being fixed as
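For reference, the structure of this two-parameter search can be mimicked with a simple random search over trial points; a Bayesian optimizer would merely replace the sampling strategy with a model-guided one. The objective below is a placeholder, not the actual error function (48); note that \(\eta \) is naturally sampled on a logarithmic scale:

```python
import numpy as np

rng = np.random.default_rng(6)

def toy_error(eta, beta):
    # Hypothetical smooth objective with a minimum near eta = 1e-4, beta = 0.5,
    # standing in for the cross-validated error function (48).
    return (np.log10(eta) + 4.0) ** 2 + (beta - 0.5) ** 2

# 200 trial points: eta log-uniform on [1e-8, 1], beta uniform on [0.1, 4].
etas = 10.0 ** rng.uniform(-8.0, 0.0, 200)
betas = rng.uniform(0.1, 4.0, 200)
errors = toy_error(etas, betas)     # vectorized evaluation of all trials
k = int(np.argmin(errors))
eta_star, beta_star = etas[k], betas[k]
```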

the response surfaces of the POD coefficients with the 1st and 2nd largest contribution rates are shown in Figs. 7a and 8a, respectively, in which the black-colored markers are the discrete values of the original POD coefficients. For comparison, the undulated response surfaces obtained without L2 regularization are shown in Figs. 7b and 8b, respectively. As can be seen, both of the regularized surfaces have little undulation and are fairly smooth. It should be noted here that the L2-regularized POD coefficient functions no longer pass through the original data points. Nevertheless, this feature has a non-negligible effect on the variation of the error with respect to the data points and the number of POD bases, as will be discussed later.

### Online computation: \({\text {FE}}^r\)

Using the surrogate localization model with the L2-regularized POD coefficient functions obtained in the offline process, we solve the macroscopic problems posed in Fig. 4 and compare the analysis results with those obtained by the application of \({\text {FE}}^2\). In Problem A, whose conditions are shown in Fig. 4a, the left and bottom edges are fixed in the \(x_1\)- and \(x_2\)-directions, respectively, and a displacement of \(U=6.0~[\mathrm {m}]\) is imposed incrementally on the right edge in the \(x_1\)-direction. On the other hand, the macrostructure for Problem B, depicted in Fig. 4b, is clamped at the left end and subjected to a forced displacement of \(U=21.0~[\mathrm {m}]\) in the \(x_2\)-direction on the node at the top-right end.

First of all, let us discuss the computational efficiency of the present approach in comparison with the \({\text {FE}}^2\). The same PC equipped with 16 CPUs of Intel(R) Xeon(R) E5-2687W (3.10GHz) and 256GB of RAM is used for all the computations. The approximately estimated computing times are presented in Table 1, in which the breakdown is provided for the \({\text {FE}}^r\) analyses. For both of the macroscopic analyses, MPI-based parallel computations were carried out. Predictably, \({\text {FE}}^2\) required much more time in evaluating the macroscopic stresses than \({\text {FE}}^r\), because each of them was calculated as a volume average of the microscopic stress obtained through a microscopic FE analysis. On the other hand, all the offline computations in the \({\text {FE}}^r\) were carried out with a single CPU, but the computing times presented here are scaled as if they were done with 16 CPUs. Here, the ‘Localization’ row in the \({\text {FE}}^r\) section contains the total time required for the \(N_c=257\) cases of microscopic analyses (localization) to construct a response data matrix containing the microscopic stresses at all the calculation points in the unit cell’s FE model. Also, the ‘Data analysis’ row contains the combined time of the following three processes: 1) POD of the response data matrix, 2) parameter search with cross-validation and Bayesian optimization, and 3) calculation of the weights for RBF interpolation to determine the POD coefficient functions. The computing times for these offline computations are the same for both cases, because the surrogate localization model is constructed just once and shared by the online computations for both of the macroscopic analyses. As can be seen from this comparative table, the proposed approach reduced the computation time necessary for \({\text {FE}}^2\) by about 95%. Nevertheless, it took more time than expected to carry out the offline computations.
This is probably due to the fact that the convexity of the corresponding macroscopic energy function is not necessarily guaranteed, even though the microscopic Kirchhoff stress, instead of the 1st PK stress, is adopted in the surrogate localization model. This problem is left unsolved at this time. It should be noted that the dominant factor in the total computation time is the number of microscopic analyses \(N_c\), because dense \(N_c \times N_c\) matrices have to be dealt with in solving the eigenvalue problem in Eqn. (18) and in the inverse matrix calculation in Eqn. (33). To reduce the computing cost for these, Eqn. (16) is used instead of Eqn. (18) if \(N_s < N_c\). In the proposed \({\text {FE}}^r\), however, \(N_c\) is supposed to be much smaller than \(N_s\). This implies that the number of data points must be made as small as possible. At the same time, a complicated unit cell geometry that requires a large number of elements for localization analyses does not have much influence on the computing time listed in the ‘Data analysis’ row of Table 1.

Secondly, to verify the performance of the surrogate localization model developed in this study, we compare the results of the microscopic analyses at the last step of the \({\text {FE}}^2\) and \({\text {FE}}^r\) computations. Figures 9 and 10 show the microscopic stress distributions and error maps in the unit cells located in Elements A and B of Problem A, respectively. Here, the error measure for the microscopic 1st PK stress components in each element is defined as

Also, the maximum values in these distributions are provided in Table 2. As can be seen, they are in close coincidence. Thus, it seems reasonable to conclude that the surrogate localization model is capable of reproducing the FE solution of the microscopic problem with a certain degree of accuracy. In other words, the surrogate localization model enables us to carry out real-time simulation of microscopic behavior in response to an arbitrary macroscopic deformation gradient. Reflecting such a capability, the results of the macroscopic analyses in the \({\text {FE}}^r\) are expected to be in fairly good agreement with those of the \({\text {FE}}^2\).

Accordingly, the third study is made on the comparison between the macroscopic stress responses of \({\text {FE}}^r\) and \({\text {FE}}^2\). Figures 11 and 12 show the macroscopic von Mises stress distributions and error maps obtained for Problems A and B, respectively. Here, the error measure for the macroscopic von Mises stress is defined as

As can be seen from these figures, the results with the \({\text {FE}}^r\) are almost identical to those of the \({\text {FE}}^2\). Here, it has been confirmed that, in each of the results, the maximum value of the macroscopic deformation gradient falls within the range prescribed above. In order to make a quantitative comparison, we show in Figs. 13 and 14 the evolutions of the macroscopic stresses and the corresponding errors in the two specific elements indicated in Fig. 4. Here, the error measure is defined as

where each macroscopic stress component \({\tilde{\sigma }}_{ij}^{\mathrm {FE}^{*}}\) is the arithmetic mean of the values at the four integration points in the target element. As can be seen from the left sides of Figs. 13 and 14, although the stresses obtained in the \({\text {FE}}^r\) gradually deviate from those of the \({\text {FE}}^2\), they are in fairly good agreement. On the other hand, the evolutions of the error, plotted in Figs. 13 and 14, do not exhibit monotonic variations. These fluctuations, which depend on the level of macroscopic deformation, are probably caused by the L2 regularization, which allows the response surfaces of the POD coefficients to deviate from their original values calculated in the POD process. Nevertheless, the errors fall within a certain range in both cases, although they tend to be large at the initial stages of deformation due to the small values of the stresses at the beginning of the macroscopic analyses.

Finally, the effect of the number of POD bases, \(N_r\), is examined. Conducting both offline and online computations for different values of \(N_r\), we calculate the following cumulated error:

where each macroscopic stress component is the arithmetic mean of the values at four integration points in the target element.

Figure 15 shows the relationships between \(N_r\) and the errors of the stress components with a focus on Element A of Problem B. In general, the accuracy of standard ROMs constructed by the application of POD improves as the number of POD bases is increased. However, as can be seen from this figure, the curves reveal no such correlation. In other words, accuracy improvement cannot be expected as the number of POD bases is increased. This is probably because different RBF interpolations with L2 regularization are conducted for different numbers of POD bases, so that the optimal set of parameters, \(\eta \) and \(\beta \), changes accordingly. Although there is some room for further investigation, the fact that the error falls within a certain range can be regarded as a favorable feature, at least within the scope of the present study.

## Conclusion

We have proposed a method of constructing a POD-based surrogate localization model to perform micro-macro coupled analyses efficiently. Since the surrogate localization model replaces the microscopic FE analyses in the conventional \({\text {FE}}^2\) method, we have coined the term \({\text {FE}}^r\) with \(1<r<<2\) accordingly. The response data matrix consists of the microscopic 1st PK stresses at all the calculation points in a unit cell’s FE model in response to the data points, namely, the macroscopic deformation gradients. POD is applied to the response data matrix to obtain a discrete version of the surrogate localization model. Also, to solve the macroscopic BVP efficiently, the POD coefficients have been made continuous with respect to the data points by the application of RBF interpolation. The novel contributions are the application of the L2 regularization to the interpolation approximations of the POD coefficients with RBFs to smooth the corresponding response surfaces, and the combined use of cross-validation and Bayesian optimization to search for both the optimal parameter in the RBFs and the optimal smoothing parameter in the L2 regularization.

The capability and promise of the proposed method are demonstrated in the representative numerical examples. A 2D porous unit cell with a hyperelastic material is taken for the sake of simplicity, while two macroscopic problems are considered to examine the performance of the surrogate localization model, which is a ROM of microscopic analysis. It was confirmed that the proposed surrogate model can reproduce the FE solution of the microscopic problem with a certain degree of accuracy. Also, since it can significantly reduce the computational cost of micro-macro coupled analyses in the context of computational homogenization, the resulting \({\text {FE}}^r\) method can be an alternative to the conventional \({\text {FE}}^2\) method. Since the combined use of cross-validation and Bayesian optimization made the smoothed response surfaces of the POD coefficient functions deviate from the original response data, the variations of the analysis errors of the \({\text {FE}}^r\) did not reveal a correlation with the number of extracted POD modes. Although no harmful effects of this feature emerged, further insight into this aspect is left to future work. In addition, the extension of the present method to inelastic deformations remains a major challenge.

## References

Nemat-Nasser S, Hori M. Micromechanics: overall properties of heterogeneous materials. Amsterdam: North-Holland; 1993.

Lions J, Papanicolaou G, Bensoussan A. Asymptotic analysis for periodic structures. New York: North-Holland; 1978.

Sanchez-Palencia E. Non-homogeneous media and vibration theory. Berlin: Springer; 1980.

Lions J. Some methods in the mathematical analysis of systems and their control. New York: Science Press; 1981.

Kanouté P, Boso D, Chaboche J, Schrefler B. Multiscale methods for composites: a review. Arch Comput Methods Eng. 2009;16(1):31–75.

Geers M, Kouznetsova V, Brekelmans W. Multi-scale computational homogenization: trends and challenges. J Comput Appl Math. 2010;234(7):2175–82.

Matouš K, Geers M, Kouznetsova V, Gillman A. A review of predictive nonlinear theories for multiscale modeling of heterogeneous materials. J Comput Phys. 2017;330:192–220.

Geers M, Kouznetsova V, Matous K, Yvonnet J. Homogenization methods and multiscale modeling: nonlinear problems. In: Stein E, de Borst R, Hughes T, editors. Encyclopedia of computational mechanics. Solids and structures. 2nd ed. New York: Wiley; 2017. p. 1–34.

Suquet P. Elements of homogenization theory for inelastic solid mechanics. In: Sanchez-Palencia E, Zaoui A, editors. Homogenization techniques for composite materials. Berlin: Springer; 1987. p. 193–278.

Guedes J, Kikuchi N. Preprocessing and postprocessing for materials based on the homogenization method with adaptive finite element methods. Comput Methods Appl Mech Eng. 1990;83(2):143–98.

Terada K, Kikuchi N. Nonlinear homogenization method for practical applications. Comput Methods Micromech. 1995;212:1–16.

Feyel F. Multiscale \({\text{ FE }}^2\) elastoviscoplastic analysis of composite structures. Comput Mater Sci. 1999;16(1–4):344–54.

Feyel F, Chaboche J. \({\text{ FE }}^2\) multiscale approach for modelling the elastoviscoplastic behaviour of long fibre SiC/Ti composite materials. Comput Methods Appl Mech Eng. 2000;183(3–4):309–30.

Terada K, Kikuchi N. A class of general algorithms for multi-scale analyses of heterogeneous media. Comput Methods Appl Mech Eng. 2001;190(40–41):5427–64.

Terada K, Saiki I, Matsui K, Yamakawa Y. Two-scale kinematics and linearization for simultaneous two-scale analysis of periodic heterogeneous solids at finite strain. Comput Methods Appl Mech Eng. 2003;192(31–32):3531–63.

Allaire G. Homogenization and two-scale convergence. SIAM J Math Anal. 1992;23(6):1482–518.

Matsui K, Terada K, Yuge Y. Two-scale finite element analysis of heterogeneous solids with periodic microstructures. Comput Struc. 2004;82(7–8):593–606.

Yvonnet J, Gonzalez D, He Q. Numerically explicit potentials for the homogenization of nonlinear elastic heterogeneous materials. Comput Methods Appl Mech Eng. 2009;198(33–36):2723–37.

Yvonnet J, Monteiro E, He Q. Computational homogenization method and reduced database model for hyperelastic heterogeneous structures. Int J Multiscale Comput Eng. 2013;11(3):201–25.

Le B, Yvonnet J, He Q. Computational homogenization of nonlinear elastic materials using neural networks. Int J Numer Methods Eng. 2015;104(12):1061–84.

Yvonnet J, He Q. The reduced model multiscale method (R3M) for the non-linear homogenization of hyperelastic media at finite strains. J Comput Phys. 2007;223(1):341–68.

Fritzen F, Kunc O. Two-stage data-driven homogenization for nonlinear solids using a reduced order model. Eur J Mech A/Solids. 2018;69:201–20.

Fritzen F, Leuschner M. Reduced basis hybrid computational homogenization based on a mixed incremental formulation. Comput Methods Appl Mech Eng. 2013;260:143–54.

Fritzen F, Marfia S, Sepe V. Reduced order modeling in nonlinear homogenization: a comparative study. Comput Struct. 2015;157:114–31.

Leuschner M, Fritzen F. Reduced order homogenization for viscoplastic composite materials including dissipative imperfect interfaces. Mech Mater. 2017;104:121–38.

Michel J, Suquet P. Nonuniform transformation field analysis. Int J Solids Struct. 2003;40(25):6937–55.

Michel J, Suquet P. A model-reduction approach in micromechanics of materials preserving the variational structure of constitutive relations. J Mech Phys Solids. 2016;90:254–85.

Sepe V, Marfia S, Sacco E. A nonuniform TFA homogenization technique based on piecewise interpolation functions of the inelastic field. Int J Solids Struct. 2013;50(5):725–42.

Fritzen F, Hodapp M, Leuschner M. GPU accelerated computational homogenization based on a variational approach in a reduced basis framework. Comput Methods Appl Mech Eng. 2014;278:186–217.

Fritzen F, Hodapp M. The finite element square reduced (\({\text{ FE }}^2R\)) method with GPU acceleration: towards three-dimensional two-scale simulations. Int J Numer Methods Eng. 2016;107:853–81.

Oskay C, Fish J. Eigendeformation-based reduced order homogenization for failure analysis of heterogeneous materials. Comput Methods Appl Mech Eng. 2007;196(7):1216–43.

Terada K, Kato J, Hirayama N, Inugai T, Yamamoto K. A method of two-scale analysis with micro-macro decoupling scheme: application to hyperelastic composite materials. Comput Mech. 2013;52(5):1199–219.

Watanabe I, Terada K. A method of predicting macroscopic yield strength of polycrystalline metals subjected to plastic forming by micro-macro de-coupling scheme. Int J Mech Sci. 2010;52(2):343–55.

Fleischhauer R, Thomas T, Kato J, Terada K, Kaliske M. Finite thermo-elastic decoupled two-scale analysis. Int J Numer Methods Eng. 2019;2019:8.

Bleier N, Mosler J. A hybrid variationally consistent homogenization approach based on Ritz’s method. Int J Numer Methods Eng. 2013;94(7):625–47.

Lumley J. The structure of inhomogeneous turbulent flows. Atmosph Turbul Wave Propag. 1967;1967:166–78.

Sirovich L. Turbulence and the dynamics of coherent structures. I-Coherent structures. II-Symmetries and transformations. III–dynamics and scaling. Q Appl Math. 1987;10(45):561–71.

Tatsumoto K, Nobuhara M, Tanigawa H, Hirata K. Thermal convection inside an oscillating cube analysed with proper orthogonal decomposition. Mech Eng J. 2015;2(2):15–00018.

Buhmann M. Multivariate cardinal interpolation with radial-basis functions. Construct Approx. 1990;6(3):225–55.

Hoerl A, Kennard R. Ridge regression: biased estimation for nonorthogonal problems. Technometrics. 1970;12:55–67.

Stone M. Cross-validatory choice and assessment of statistical predictions. J R Stat Soc Series B. 1974;36(2):111–47.

Mockus J. On Bayesian Methods for Seeking the Extremum. In: Proceedings of the IFIP Technical Conference. London: Springer; 1974. p. 400–404.

Mockus J. On Bayesian Methods for Seeking the Extremum and their Application. In: IFIP Congress. 1977. p. 195–200.

## Acknowledgements

The authors acknowledge the invitation by the organizers (Professors Paul Steinmann and Andrew McBride) to CMCS 2019 (the international ECCOMAS Thematic Conference), which took place on 1-4 October 2019 in Glasgow, UK. Thanks to this conference, their work was invited for the special issue “Computational Modeling of Complex Materials across the Scales (CMCS)”. The authors would like to express their greatest appreciation to Prof. Marc Geers, who was one of the co-chairs of CMCS 2019, for his patient encouragement of our submission.

## Funding

The submission fee is fully covered by CSMA “Computational Structural Mechanics Association”.

## Author information

### Authors and Affiliations

### Contributions

The first four authors (RH, SM, S M and KT) participated in the development of the method, while the last author (JY) read the manuscript thoroughly and provided some valuable comments and suggestions to improve the quality of the manuscript. The corresponding author is responsible for completing it. All authors read and approved the final manuscript.

### Corresponding author

## Ethics declarations

### Competing interests

The authors declare that they have no competing interests.

## Additional information

### Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

\(\dagger \)Ryo Hatano, Seishiro Matsubara, Shuji Moriguchi and Kenjiro Terada contributed equally to the manuscript.

## Rights and permissions

**Open Access** This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

## About this article

### Cite this article

Hatano, R., Matsubara, S., Moriguchi, S. *et al.* \({\text {FE}}^r\) method with surrogate localization model for hyperelastic composite materials.
*Adv. Model. and Simul. in Eng. Sci.* **7**, 39 (2020). https://doi.org/10.1186/s40323-020-00175-0


### Keywords

- Micro-macro coupled analysis
- Proper orthogonal decomposition
- Radial basis function interpolation
- L2 Regularization
- Hyperelastic composites