
Spline-based specimen shape optimization for robust material model calibration

Abstract

Identification from field measurements allows several parameters to be identified from a single test, provided that the measurements are sensitive enough to the parameters to be identified. To do this, authors use empirically defined geometries (with holes, notches...). The first attempts to optimize the specimen so as to maximize the sensitivity of the measurement rely on a design space that is either very small (parametric optimization), which does not allow the exploration of very different designs, or, conversely, very large (topology optimization), which sometimes leads to designs that are irregular and cannot be manufactured. In this paper, an intermediate approach based on a non-invasive CAD-inspired optimization strategy is proposed. It relies on the definition of univariate spline Free-Form Deformation boxes to reduce the design space and thus regularize the problem. Then, from the modeling point of view, a new objective function is proposed that takes into account the experimental setup, and constraint functions are added to ensure that the gain is real and the shape physically sound. Several examples show that, with this method and at low cost, one can significantly improve the identification of constitutive parameters without changing the experimental setup.

Introduction

Along with the design of manufactured materials (composites, architected materials, lattices...) or the development of advanced manufacturing processes comes the need to develop simulation tools capable of predicting the behaviour of complex parts. Although purely model-free data-driven simulation tools have been proposed very recently [1], most of the tools are essentially based on more or less sophisticated constitutive models [2] or their hybridization with artificial intelligence [3]. From an experimental point of view, identifying the possibly large number of parameters of such complex models requires designing several different experiments [4]. Today, calibrating a single material model can require carrying out dozens of experiments, which makes the overall process very costly and time consuming.

The current trend is to develop procedures that minimize the number of tests; maximize the benefits of the necessary tests by optimizing them (sample shape, loading path, reduced design of experiments...); get more and richer data from them; and better integrate them with finite element simulations. This approach is referred to as smart testing [5]. For instance, it is possible to calibrate as many parameters as possible at once from a very small number of tests thanks to identification from full-field measurements [6]. Many identification methods have been proposed [7]. Roux et al. [8] reformulated most of them as the minimization of a metric used to measure the distance between measured and computed displacement fields. In this respect, identification can be viewed as data assimilation in the mechanics of materials. The methodology proposed in this paper will be illustrated with the weighted Finite Element Model Updating method (FEMU [9, 10]), but, following [8], it could also be applied to other identification procedures.

To do this, the testing and instrumentation must meet the following two conditions [6]: first, rich instrumentation techniques must be used, such as field measurements, in order to increase the amount of data (i.e. measured quantities); second, the measured quantities must be sufficiently heterogeneous and, more precisely, sensitive to the sought constitutive parameters. To achieve this, instead of standardized (uni-axial) tests, authors considered complex specimen shapes (along with possibly complex loading conditions [11]) to better sample the material response. For instance, authors drilled holes [12, 13], machined different notch shapes [14] or designed cruciform samples [15] with different shapes and fillet radii, to name a few. But the choice of notch geometry, hole radius and position is often made empirically and does not guarantee any form of optimality with regard to the identification of the parameters.

Going further in the optimization of the shape of the specimen and improving the identification of constitutive parameters requires: (a) defining one or more optimality criteria (modeling stage) and (b) building a suitable shape optimization algorithm (solution stage).

Regarding the modeling stage, numerous criteria for the optimality of a specimen have been proposed in the literature. They can be classified into two main families. The first family does not consider any specific constitutive model: it relies on a measure of the heterogeneity of material states. The idea is usually to quantify the ability of a specimen to properly sample the behaviour in the principal stress/strain plane [15,16,17,18] or to target a specific triaxiality [19]. These methods are designed to ensure that the specimen correctly samples the behaviour, which is useful for discriminating between different constitutive models. Although the identification of constitutive parameters can be improved, it is not guaranteed, as the objective function does not measure the accuracy with which parameters are identified.

The second, and more recent, approach consists in optimizing the shape of the specimen with respect to one given constitutive model. The criterion, originally proposed by Bertin et al. [20], is based on the minimization of the covariance matrix of the identified material parameters. It relies on a fine analysis of the propagation of the measurement noise in the identification process [8, 21]. This approach is particularly attractive since it can reduce the uncertainty of the identified parameters by orders of magnitude. However, let us underline that the modeling is usually based on the sole minimization of a cost function (without constraints) and that the latter is only related to the spectrum of the covariance matrix. As such, there may be competition between the specimen shape and other experimental settings. For instance, from an experimental point of view, if the overall size of the specimen changes, the field of view of the camera must be adapted so that it covers the entire specimen. This modifies the image resolution and therefore the uncertainty. On the other hand, in the above cited works, the maximum strain can increase during optimization. There is then a competition between modifying the shape of the specimen and increasing the loading level, whereas the latter is not a design variable.

Regarding the solution stage, it has first to be said that most studies do not truly run optimization algorithms but rather use the two above criteria families to compare existing or manually updated designs. Only a few authors relied on automatic optimization algorithms to explore larger design spaces. In this context, most of the proposed optimizations (e.g., [19, 20, 22]) are carried out in very small design spaces, where only geometric parameters such as radii and hole positions are modified. This is generally known as parametric optimization. The advantage with one or two design variables is that a real optimization algorithm is not mandatory, since a graphical or manual optimization may be sufficient. However, this restricts the exploration to very limited variations of the geometry. At the opposite extreme, very large design spaces were considered to explore completely generic designs in [23], where the authors developed a SIMP-type topology optimization algorithm. The method has the advantage of being able to evolve the topology and therefore to create holes or notches if necessary. Yet, there are still many limitations. The design space is very large, which results in prohibitive computational costs and may lead to unrealistic irregular shapes. The optimized shape must be redesigned manually in a post-processing phase to improve regularity and/or machinability, without considering the effect on optimality or on the sensitivity fields.

In this paper, it is proposed to follow an intermediate path regarding the design space. Given the highly ill-posed nature of the problem, the spirit is not to determine the ideal specimen (if it exists) but to further improve an existing manually designed or parametrically optimized specimen. Here, the use of a spline-based geometric shape optimization as in [18] is proposed because it allows free-form geometry modifications with unchanged topology. Thanks to spline functions, it is possible to keep a low number of design variables and a design sub-space made of regular shapes. However, for the sake of having a generic and simple method from an implementation point of view, let us underline that a “classical” Finite Element (FE) mesh is considered for the computation (with a sufficient refinement to obtain good accuracy). A non-invasive CAD-based optimization strategy is therefore developed with the help of univariate spline Free-Form Deformation (FFD) boxes that relate the movement of the FE mesh to spline design variables during the optimization. Formally, the spline design space can be interpreted as a reduced-order space of a FE design space that would be directly associated with the FE mesh, which makes it possible to regularize the optimization problem in a flexible way (see, e.g., [24, 25] for similar ideas in other contexts). Then, from a modeling point of view, the aim is to improve the sensitivity to several constitutive parameters, for the purpose of identifying several parameters at once. This requires modifying the usual formulation of the optimization problem (objective functions and constraints), in particular to take into account the image resolution and the loading magnitude.

This paper is organized as follows: Sect. “Constitutive parameter identification from DIC” quickly reviews the process of parameter identification, namely the FEMU method and its functional, and the covariance matrix that comes from coupling FEMU to Digital Image Correlation (DIC) and that gives a representation of the uncertainty over the identified constitutive parameters. Sect. “A spline-based regular reduced-order design space” focuses on the non-invasive spline optimization strategy, based on FFD [26] as reduced-order modeling for the design variables, that is implemented to make the method suitable to any possible geometry. Then, Sect. “Appropriate modeling of the specimen shape optimization problem” presents the developed modeling for the specimen shape optimization problem. In particular, our choices are explained regarding the cost function and constraints so that the optimization results are physically meaningful and the obtained geometries are machinable. These choices are illustrated and validated through a simple analytic example. Finally, numerical experiments are conducted in Sect. “Numerical examples” to assess the performance of the methodology on more complex structures with both isotropic and orthotropic linear elastic constitutive relations, and concluding remarks are drawn in Sect. “Conclusion”.

Constitutive parameter identification from DIC

In this work, the FEMU method is considered for material constitutive parameter identification and, without loss of generality, in the context of 2D-DIC.

FEMU method

The FEMU method consists in comparing a measured quantity to a simulated one, typically a measured displacement field on a specimen obtained with DIC to a displacement field obtained from numerical computation with similar boundary conditions and the chosen material constitutive law [27, 28]. The aim is to find constitutive parameters values that minimize the discrepancy between the simulated field and the measured field. The functional to minimize thus reads:

$$\begin{aligned} F_p({\mathbf {q}}) = \Vert {\mathbf {v}}({\mathbf {q}}) - {\mathbf {u}} \Vert _H ^2 \; , \end{aligned}$$
(1)

where \({\mathbf {v}}({\mathbf {q}})\) and \({\mathbf {u}}\) are vectors gathering the simulated and measured displacements respectively.

Remark In this paper, the analysis focuses on the case of full-field measurements performed by DIC [29, 30]. Simulated displacement fields \({\mathbf {v}}\) generally come from FE software and are therefore expressed at the nodes of a FE mesh. An easy way to compare a measured displacement field \({\mathbf {u}}\) with a simulated one is to seek the measured displacement field in the same FE space as used for the simulation. It is usually referred to as FE-DIC in the community [13, 21, 31,32,33]. Note that there exist many weak [21, 34, 35] or strong [25, 32] regularization techniques which make it possible to perform FE-DIC on arbitrarily fine (let us say analysis-suitable) FE meshes and which remove any limitation on element size (or degree, or type) in the simulation. From now on, \({\mathbf {v}}\) and \({\mathbf {u}}\) will represent the displacement values (DOF) at the nodes of a unique FE mesh (the same for simulation and measurement).

Vector \({\mathbf {q}}\) is the set of sought constitutive parameters and \({\mathbf {H}}\) is a symmetric positive definite operator such that \(\Vert {\mathbf {a}}\Vert _H^2 = {\mathbf {a}}^T {\mathbf {H}}\; {\mathbf {a}}\). The quality of the results obviously depends on the quality of the measurement. A consequence is that there exists a “best” norm to quantify the discrepancy between these two fields, which consists in taking \({\mathbf {H}}\) equal to the inverse of the covariance matrix of the measured quantity \([\mathbf {Cov}^{{\mathbf {u}}}]^{-1}\) [8, 36]. This norm gives to each DOF a weight that is inversely proportional to its uncertainty.

To find \({\mathbf {q}}\) that minimizes \(F_p({\mathbf {q}})\), a Gauss-Newton algorithm was used. Starting from an initial set \({\mathbf {q}}^{(0)}\), each Gauss-Newton iteration k updates the values of the constitutive parameters as follows:

$$\begin{aligned} \forall k \in {{\mathbb {N}}}, \quad {\mathbf {q}}^{(k+1)} = {\mathbf {q}}^{(k)} + \delta {\mathbf {q}}^{(k)} \; . \end{aligned}$$
(2)

\(\delta {\mathbf {q}}^{(k)}\) is computed as the solution of a linear system:

$$\begin{aligned} {\mathbf {H}}_{FEMU}^{(k)} \; \delta {\mathbf {q}}^{(k)} = {\mathbf {b}}_{FEMU}^{(k)} \; , \quad \text {with} \quad \left\{ \begin{array}{rcl} {\mathbf {H}}_{FEMU}^{(k)} &= & \varvec{\nabla }_q {{\mathbf {v}}^{(k)}} \; {\mathbf {H}} \; \varvec{\nabla }_q {{\mathbf {v}}^{(k)}}^T \\ {\mathbf {b}}_{FEMU}^{(k)} &= & \varvec{\nabla }_q {{\mathbf {v}}^{(k)}} \; {\mathbf {H}} \; \big ( {\mathbf {u}} - {\mathbf {v}}^{(k)} \big ) \end{array} \right. \; , \end{aligned}$$
(3)

where \({\mathbf {v}}^{(k)}={\mathbf {v}}({\mathbf {q}}^{(k)})\) and \(\varvec{\nabla }_q\) is the gradient with respect to the sought constitutive parameters, i.e. its row i is \(\frac{\partial }{\partial q_i}\), the sensitivity field with respect to parameter \(q_i\). Note that the normalization of the constitutive parameters is of utmost importance for such an analysis.
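To make these update formulas concrete, here is a minimal Python sketch of one possible Gauss-Newton FEMU loop. It is only an illustration, not the actual code used in this work; `simulate` and `sensitivities` are hypothetical callables standing for the FE solver (returning \({\mathbf {v}}({\mathbf {q}})\)) and for the computation of \(\varvec{\nabla }_q {\mathbf {v}}\).

```python
import numpy as np

def femu_gauss_newton(q0, u_meas, H, simulate, sensitivities,
                      max_iter=20, tol=1e-8):
    """Minimize F_p(q) = ||v(q) - u||_H^2 with Gauss-Newton iterations (Eqs. 2-3)."""
    q = np.asarray(q0, dtype=float).copy()
    for _ in range(max_iter):
        v = simulate(q)                # simulated nodal displacements v(q^k)
        S = sensitivities(q)           # rows are dv/dq_i, shape (n_q, n_dof)
        H_femu = S @ H @ S.T           # left-hand side operator of Eq. (3)
        b_femu = S @ H @ (u_meas - v)  # right-hand side of Eq. (3)
        dq = np.linalg.solve(H_femu, b_femu)
        q += dq                        # parameter update, Eq. (2)
        if np.linalg.norm(dq) < tol * np.linalg.norm(q):
            break
    return q
```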

Noise propagation in FE-DIC

DIC aims at measuring a displacement field \({\mathbf {u}}\) from the comparison of a reference state image \({\mathcal {I}}\) with a deformed state image \({\mathcal {J}}\). Errors can arise from any step of the DIC procedure (speckle pattern quality, lighting, air heat gradients, camera calibration, camera noise, subpixel interpolation, displacement field numerical approximation...). In this paper, the optimality criterion will be based on the noise-induced uncertainty and will omit other possible biases.

The effect of image noise on displacement uncertainty (or on its covariance) can be characterized explicitly in DIC. Indeed, following [20, 31, 33], \({\mathcal {I}}\) and \({\mathcal {J}}\) are assumed to be independently affected by white noise of variance \(\gamma ^2\) or, equivalently, only \({\mathcal {J}}\) is assumed to be affected by white noise of variance \(2 \gamma ^2\). In this context, it can be shown that the covariance of the measured displacements is such that [20]:

$$\begin{aligned} [\mathbf {Cov}^{{\mathbf {u}}}] = 2 \gamma ^2 \, {\mathbf {H}}_{DIC}^{-1}\; . \end{aligned}$$

Operator \({\mathbf {H}}_{DIC}\) is the approximation of the Hessian of the DIC functional which is a cost-free output of the DIC problem and which has the following form [37]:

$$\begin{aligned} {\mathbf {H}}_{DIC} = \int _I {{\mathbf {N}}}^T {\varvec{\nabla }} {\mathcal {I}}\; \varvec{\nabla } {\mathcal {I}}^T {\mathbf {N}}\; dx \; , \end{aligned}$$
(4)

where I is the Region Of Interest (ROI) in the images and \({\mathbf {N}}\) the matrix gathering the FE shape functions.

Noise propagation in FEMU

From (3), the covariance of the identified constitutive parameters reads [20, 21]:

$$\begin{aligned} [\mathbf {Cov}^{{\mathbf {q}}}] = {\mathbf {H}}_{FEMU}^{-1} = 2 \gamma ^2 \; \Big ( \varvec{\nabla }_q {{\mathbf {v}}} \; {\mathbf {H}}_{DIC} \; \varvec{\nabla }_q {{\mathbf {v}}}^T \Big )^{-1} \; , \end{aligned}$$
(5)

where \(\varvec{\nabla }_q {{\mathbf {v}}}\) is the sensitivity field with respect to a set of parameters that should be close to the actual ones (see the beginning of Sect. “Computation of the sensitivity fields w.r.t. constitutive parameters” for further details). This covariance matrix \([\mathbf {Cov}^{{\mathbf {q}}}]\) is of size \(n_q \times n_q\), with \(n_q\) the number of sought constitutive parameters. It contains helpful information for deriving a representative criterion of a “good” constitutive parameter identification [20].
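As an illustration of this noise propagation formula, the following short sketch (a hypothetical helper, not taken from the paper's code) computes \([\mathbf {Cov}^{{\mathbf {q}}}]\) and its eigenvalues from the sensitivity fields and the DIC Hessian.

```python
import numpy as np

def parameter_covariance(S, H_dic, gamma):
    """Eq. (5): Cov^q = 2*gamma^2 * (S @ H_DIC @ S.T)^(-1), with S rows = dv/dq_i."""
    H_femu = S @ H_dic @ S.T
    cov_q = 2.0 * gamma**2 * np.linalg.inv(H_femu)
    eigvals = np.linalg.eigvalsh(cov_q)   # real eigenvalues, sorted ascending
    return cov_q, eigvals
```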

A spline-based regular reduced-order design space

Before detailing the developed modeling of the specimen shape optimization problem (i.e., cost function + constraints), let us introduce here the strategy implemented in terms of design space. From a general point of view, it is necessary to (i) choose a suitable and flexible design space to parametrize the shape modifications and (ii) build an efficient technique to update the FE mesh when the geometry evolves. Let us recall that the method relies on a unique FE mesh which is used for both the simulation and the measurement. It is this mesh that describes the geometry of the sample and that is updated during the optimization.

Design space using spline FFD

As stated in the introduction, our objective in this work is to further improve the shape of a manually designed or a parametrically optimized specimen. Thus, no topology optimization will be performed. Such strategies proved to be useful in the field of structural optimization [38,39,40,41,42] since they offer a very large design space. Conversely, they usually imply prohibitive computational costs with numerous remeshing steps and may require specific regularization procedures to obtain realistic shapes. In our case of an optimization problem related to the minimization of the identification uncertainty, the last point appears even more challenging (see [23]). Alternatively, only shape optimization will be carried out in this work. In this context, the displacements of the mesh nodes could be directly used as design variables. The literature, once again in the field of structural shape optimization, shows that this corresponds to excessively large design spaces, which may lead to irregular shapes that inherit the \(C^0\) properties of the FE basis functions unless additional smoothing filters are used [43,44,45].

To circumvent these issues, a more regular spline space was used as a search space, as performed originally in so-called CAD-based structural shape optimization [43, 46], and more recently in isogeometric shape optimization [47,48,49,50,51,52]. Spline functions are well suited for shape optimization since they have been built for geometric modeling in CAD. More precisely, they offer higher regularity and thus involve few DOF, associated with control point positions. They can conveniently describe a geometry and, more importantly, a geometry evolution. For specimen shape optimization in our context, let us note that splines were also used in [18], where the obtained spline geometry was given as an input to Abaqus for each computation in the optimization process. Such a technique may require a lot of remeshing steps, especially when estimating sensitivities.

Standard FFD

Our goal is to take advantage of the spline properties but also to use them in a non-invasive way so that usual FE simulation software can be called upon without remeshing. To do so, it is proposed to rely on the FFD concept that was first introduced in [26] in the field of computer graphics and later applied to shape optimization in many contexts [25, 53,54,55,56]. In some sense, the FFD technique allows to build the spline space as a vector subspace of the FE vector space [25]. More precisely, FFD consists in embedding the initial FE geometry into a (usually) spline morphing box. The deformation of the FE mesh during the shape update can therefore be constrained to follow the deformation of the morphing box, the latter being related to the movement of its control points. In practice, the deformation of the morphing box is applied to the nodes of the FE mesh; that is, each FE node is prescribed to move as the point at the corresponding location inside the morphing box. Denoting by \({\mathbf {s}}_{fe}\) and \({\mathbf {s}}_{cp}\) the vectors collecting the FE design variables (i.e. the movement of the FE nodes) and the spline design variables (i.e. the movement of the box control points), respectively, it can thus be written:

$$\begin{aligned} {\mathbf {s}}_{fe} = {\mathbf {C}}_{FFD}^T\; {\mathbf {s}}_{cp} , \end{aligned}$$
(6)

where \({\mathbf {C}}_{FFD}\) is the FFD operator that gathers the evaluation of the spline functions at the FE nodes of the mesh, see [25]. This method is interesting because it decouples the design space (spline-based) from that of the geometry description (FE-based).

Not all DOF associated to each control point of the morphing box are necessarily considered as independent design variables. A linear operator \({\mathbf {C}}_{s}\) can be defined to group some of the design variables in \({\mathbf {s}}_{cp}\):

$$\begin{aligned} {\mathbf {s}}_{cp} = {\mathbf {C}}_s\; {\mathbf {s}}, \end{aligned}$$
(7)

which leads to consider vector \({\mathbf {s}}\) that gathers the truly independent design variables. Recapitulating, the link between the design variables \({\mathbf {s}}\) and \({\mathbf {s}}_{fe}\) can be written as follows:

$$\begin{aligned} {\mathbf {s}}_{fe} = {\mathbf {C}}_{update}\; {\mathbf {s}}\qquad \text {with} \qquad {\mathbf {C}}_{update} = {\mathbf {C}}_{FFD}^T \; {\mathbf {C}}_{s}, \end{aligned}$$
(8)

which exhibits the reduced-order treatment performed in terms of design. At this stage, let us mention that, in contrast to classic reduced basis methods in simulation, \({\mathbf {C}}_{update}\) is sparse.

Univariate spline-based FFD

Following the common practice in shape optimization with FFD, a first path would be to embed the whole specimen FE mesh into a single bivariate FFD morphing box [25]. The main advantage is that it would be possible to update, up to a certain extent, the whole mesh directly with control points. The problem is that it introduces unnecessary control points in the volume, which can distort the mesh without changing the shape. Alternatively, it is proposed in this work to create one univariate morphing box for each geometrical feature edge (circle, notch, line...) that is to be optimized. Therefore, the morphing box, which is a curve in 2D, has to conform to the initial geometry of the feature edge it controls.

Similarly to Eq. (6), we end up with a FFD matrix \({\mathbf {C}}^{e}_{FFD}\) linking the FE design variables \({\mathbf {s}}^e_{fe}\) that move the edge (i.e. not all the FE mesh nodes) to the design variables \({\mathbf {s}}_{cp}^{e}\) associated to the control points of the spline FFD curve:

$$\begin{aligned} {\mathbf {s}}^e_{fe} = \left( {\mathbf {C}}^{e}_{FFD}\right) ^T\; {\mathbf {s}}_{cp}^{e}. \end{aligned}$$
(9)

From a practical point of view, this variant requires knowing the location of the FE nodes in the parametric space of the spline FFD curve. This may imply inverting the spline curve mapping (unlike with simple bivariate boxes, the parametric space no longer coincides with the physical space). To simplify this treatment, the edge nodes were directly defined in the parametric space of the FFD curve and mapped onto the physical space using the spline transformation before generating the FE mesh. Further details will be given in Sect. “Numerical examples” (see, in particular, Fig. 9).

With such a choice, all control points have an equivalent influence on the edges they control. This choice avoids the condition-number-related issues (when a volume control point has only a slight influence on the geometry) reported in [25]. Moreover, refining the box, if needed, does not require any extra treatment. However, the drawback is that only a subset of the FE nodes is controlled by the modification of the morphing boxes. An efficient technique to update the FE mesh in the bulk, given an evolution of the feature edge nodes through \({\mathbf {s}}^e_{fe}\), therefore needs to be built.
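To illustrate how the operator \({\mathbf {C}}^{e}_{FFD}\) of Eq. (9) could be assembled, the following self-contained sketch evaluates quadratic B-spline basis functions (Cox-de Boor recursion) at assumed parametric locations \(\xi ^{FE}\) of the edge nodes; the knot vector, the number of control points and the imposed control point movement are purely illustrative and not taken from the paper.

```python
import numpy as np

def bspline_basis(i, p, knots, xi):
    """Cox-de Boor recursion: value of the i-th degree-p B-spline at xi."""
    if p == 0:
        return 1.0 if knots[i] <= xi < knots[i + 1] else 0.0
    left = right = 0.0
    if knots[i + p] > knots[i]:
        left = (xi - knots[i]) / (knots[i + p] - knots[i]) \
               * bspline_basis(i, p - 1, knots, xi)
    if knots[i + p + 1] > knots[i + 1]:
        right = (knots[i + p + 1] - xi) / (knots[i + p + 1] - knots[i + 1]) \
                * bspline_basis(i + 1, p - 1, knots, xi)
    return left + right

def ffd_operator(xi_fe, knots, n_cp, degree=2):
    """C_FFD^e: spline basis functions evaluated at the edge-node parameters."""
    C = np.zeros((n_cp, len(xi_fe)))
    for j, xi in enumerate(xi_fe):
        for i in range(n_cp):
            C[i, j] = bspline_basis(i, degree, knots, xi)
    return C

# Open quadratic knot vector with 6 control points on [0, 1):
knots = np.array([0, 0, 0, 0.25, 0.5, 0.75, 1, 1, 1], dtype=float)
xi_fe = np.linspace(0.0, 0.999, 50)            # edge FE node parameters (xi = 1 excluded)
C_ffd = ffd_operator(xi_fe, knots, n_cp=6)
s_cp = np.zeros((6, 2)); s_cp[3] = [0.0, 5.0]  # move one control point by 5 units in y
s_fe_edge = C_ffd.T @ s_cp                     # Eq. (9): edge node displacements
```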

Mesh updating strategy

In order to avoid remeshing during optimization, it is proposed to use a mesh morphing technique that propagates a variation of the edge geometry, expressed through \({\mathbf {s}}^e_{fe}\), into the domain. To do so, a simple linear elastic boundary value problem is defined with a unitary elastic modulus \(E=1\) and a zero Poisson ratio. The deformation of the edges is considered as a prescribed displacement on the corresponding nodes, and the goal is to obtain the resulting displacement \({\mathbf {s}}_{fe}\) of all FE nodes, which is the sought mesh correction field [57, 58]. In practice, a static condensation is performed which leads to the morphing matrix \({\mathbf {C}}_m\) such that:

$$\begin{aligned} {\mathbf {s}}_{fe}={\mathbf {C}}_{m} \; {\mathbf {s}}^e_{fe} \qquad \text {with} \qquad {\mathbf {C}}_{m} = \begin{bmatrix} I \\ - \left( {\mathbf {K}}^{{m}}_{rr}\right) ^{-1} {\mathbf {K}}^{m}_{r{e}} \end{bmatrix} \; , \end{aligned}$$
(10)

where \({\mathbf {K}}^{m}_{rr}\) and \({\mathbf {K}}^{m}_{re}\) are the sub-matrices of the stiffness matrix \({\mathbf {K}}^{m}\) of the morphing problem and indices e and r define the DOF of the edge nodes and of the remaining nodes, respectively. The inverse of operator \({\mathbf {K}}_{rr}^{m}\) is obviously not explicitly computed. Only an LU factorization is performed and an efficient multi right-hand side solver is used to compute \({\mathbf {C}}_m\).

From all these steps, a unique matrix \({\mathbf {C}}_{update}\) can be defined that reduces the initial design space related to the movement of all FE nodes to a low-dimensional design subspace associated to the sole independent edge design variables collected into \({\mathbf {s}}^e\):

$$\begin{aligned} {\mathbf {s}}_{fe} = {\mathbf {C}}_{update}\; {\mathbf {s}}^e \qquad \text {with} \qquad {\mathbf {C}}_{update} = {\mathbf {C}}_{m} \; \left( {\mathbf {C}}^{e}_{FFD} \right) ^T \; {\mathbf {C}}^{e}_{s}. \end{aligned}$$
(11)

Obviously, operator \({\mathbf {C}}^{e}_{s}\) is the counterpart of \({\mathbf {C}}_s\) in (7) for the edge design variables. Once again, this composition of operators provides an explicit link between the (few) edge design variables and the movement of all mesh nodes. Figure 1 illustrates the transformations performed by the different operators. Starting from any FE mesh, it defines a reduced design subspace, similarly to the reduced basis methods in computational mechanics which perform a projection onto a reduced approximation subspace. This operator is assembled once before starting the optimization solver.

Fig. 1
figure 1

Illustration of the mesh updating strategy with minimal remeshing: (top-left) design variables \({\mathbf {s}}^e\) defined as a subset of edge control points DOF; (top-right) edge control points displacement \({\mathbf {s}}^e_{cp} = {\mathbf {C}}_s^e \; {\mathbf {s}}^e\) ; (bottom-left) edge FE nodes displacement \({\mathbf {s}}^e_{fe} = ({\mathbf {C}}_{FFD}^e)^T \; {\mathbf {s}}^e_{cp}\) ; (bottom right) all FE nodes displacement \({\mathbf {s}}_{fe} = {\mathbf {C}}_{m} \; {\mathbf {s}}^e_{fe}\)
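A compact sketch of the static condensation of Eq. (10) is given below; the stiffness matrix `K_m` of the auxiliary morphing problem and the index sets `dofs_e` / `dofs_r` are assumed to be already available (hypothetical names), and the edge DOF are supposed to be listed first in \({\mathbf {s}}_{fe}\).

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import splu

def morphing_operator(K_m, dofs_e, dofs_r):
    """Return C_m such that s_fe = C_m @ s_fe_edge (Eq. 10)."""
    K = sp.csr_matrix(K_m)
    K_rr = K[dofs_r][:, dofs_r]          # stiffness block of the remaining DOF
    K_re = K[dofs_r][:, dofs_e]          # coupling block with the edge DOF
    lu = splu(K_rr.tocsc())              # single LU factorization
    X = lu.solve(K_re.toarray())         # multi right-hand-side solve
    return np.vstack([np.eye(len(dofs_e)), -X])
```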

Control of mesh distortion

A typical issue to avoid during the shape update is the intersection of an edge with itself or with another edge. This issue can happen if control points associated to design variables \({\mathbf {s}}^e\) cross each other. It results in geometries that have no physical meaning, and some mesh elements can be totally or partially flipped as exemplified in Fig. 2. The elastic morphing can also lead to excessive distortion of the mesh elements. Indeed, this morphing is based on the resolution of an unphysical linear elastic boundary value problem that could lead to strains smaller than -1 in some elements, thereby making them flip. Other types of morphing could also be used [58].

Fig. 2
figure 2

Non-permissible geometry, due to the loop made by the upper edge

Excessive element distortion results in a small transformation Jacobian determinant, which becomes negative for flipped elements. Therefore, a constraint function was defined that helps keep this quantity above a certain positive threshold \(\epsilon _{Jac}\) at each Gauss point throughout the optimization process. Besides, this quantity is already computed to obtain the stiffness matrix needed for the cost function, so the constraint adds only a marginal computational cost. For computational purposes, the constraint function is normalized with the initial values of the transformation Jacobian determinant. The constraint reads:

$$\begin{aligned} \forall k \in [1..n_{pg}], \quad \frac{\left( \det ({\mathbf {J}}({\mathbf {s}}^e))\right) _k}{\left( \det ({\mathbf {J}}_0)\right) _k} \ge \epsilon _{Jac} \; , \end{aligned}$$
(12)

where \(n_{pg}\) is the number of Gauss points in the FE model, \(\left( \det ({\mathbf {J}})\right) _k\) is the transformation Jacobian determinant at Gauss point k, \(\left( \det ({\mathbf {J}}_0)\right) _k\) is the initial transformation Jacobian determinant at Gauss point k, and \(\epsilon _{Jac}\) is the chosen threshold.
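In an inequality-constrained solver such as the one used in Sect. “Numerical examples”, this constraint can be passed as a vector-valued function; a minimal sketch, with `det_J` a placeholder routine returning the Jacobian determinants at all Gauss points and an illustrative threshold value, could read:

```python
import numpy as np

def jacobian_constraint(s_e, det_J, det_J0, eps_jac=0.2):
    """Eq. (12) in the form g(s_e) >= 0 expected by inequality-constrained solvers."""
    # eps_jac is an illustrative value of the positive threshold epsilon_Jac
    return det_J(s_e) / np.asarray(det_J0) - eps_jac
```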

Remark

If this constraint is active at convergence of the optimization algorithm, it means that the morphing may have distorted the mesh too much. This is a good indicator that allows us to remesh the domain with the new shape and restart a new optimization phase [58].

Appropriate modeling of the specimen shape optimization problem

Now that the design space has been defined, the proposed modeling approach to formulate the original shape optimization problem is presented. The cost functions used today in the field are first reviewed (see, e.g., [20, 22, 23]). They are based on the knowledge of noise propagation in the identification workflow recalled in Sect. “Constitutive parameter identification from DIC”. They all aim at optimizing the shape of a specimen to best identify the constitutive parameters of a specific model. Then, the optimization problem is improved and complemented with constraints in order to get a physically sound geometry. An example with a simple tension beam is presented at the end of this section to validate our modeling choices.

Construction and evaluation of the cost function

As recalled in Sect. “Constitutive parameter identification from DIC”, the covariance matrix provides an estimation of the quality of the identified parameters [20, 22, 23]. Improving the quality of the identification can thus be achieved by “minimizing” this covariance matrix that expresses the uncertainty on the constitutive parameter values, i.e. by “minimizing” \([\mathbf {Cov}^{{\mathbf {q}}}]({\mathbf {s}}^e) = {\mathbf {H}}_{FEMU}^{-1} ({\mathbf {s}}^e)\) (see Eq. (5)) with respect to design variables \({\mathbf {s}}^e\) that modify the geometry of the specimen. This section defines what “minimize” means for a matrix.

Physical meaning of the eigenvalues of the covariance matrix

The idea proposed in [20, 22, 23] is to work on the eigenvalues of the covariance matrix. These eigenvalues have a physical meaning when looking at the multivariate normal distribution associated to the covariance matrix \([\mathbf {Cov}^{{\mathbf {q}}}]\). In fact, the isosurface where \({\mathbf {q}}^T [\mathbf {Cov}^{{\mathbf {q}}}]^{-1} {\mathbf {q}}\) equals 1 is an ellipsoid whose principal semi-axis directions are given by the eigenvectors of \([\mathbf {Cov}^{{\mathbf {q}}}]\) and whose semi-axis lengths are the square roots of the associated eigenvalues. An illustration is shown with 2 variables in Fig. 3.

Fig. 3
figure 3

Bivariate normal distribution associated with variances 1 and 2 and covariance 0.5. Eigenvalues are \(\Lambda _1=0.79\) and \(\Lambda _2=2.21\), associated to eigenvectors \([-0.92, 0.38]\) and [0.38, 0.92]. In black, the ellipse with semi-axes of lengths \(\sqrt{\Lambda _1}\) and \(\sqrt{\Lambda _2}\)

To minimize the uncertainty on the identified constitutive parameters, this ellipse should be as small as possible, meaning that the \([\mathbf {Cov}^{{\mathbf {q}}}]\) eigenvalues should be as small as possible. These eigenvalues also correspond to the inverse of the curvatures of the FEMU functional near the optimum set of constitutive parameters, since \([\mathbf {Cov}^{{\mathbf {q}}}]= {\mathbf {H}}_{FEMU}^{-1}\) and \({\mathbf {H}}_{FEMU}\) is an approximation of the Hessian matrix of \(F_p\) (see Eq. (1)). From this point of view, decreasing the \([\mathbf {Cov}^{{\mathbf {q}}}]\) eigenvalues improves the convexity of the FEMU functional, which also translates into a better confidence in the identified constitutive parameters.

Defining a criterion over eigenvalues

In order to perform an optimization of the constitutive parameter identification procedure, we need to derive a unique scalar criterion from \([\mathbf {Cov}^{{\mathbf {q}}}]\) eigenvalues. Several choices exist.

Feld et al. [22] minimize the ratio of the largest eigenvalue to the lowest one. This choice improves the conditioning of \([\mathbf {Cov}^{{\mathbf {q}}}]\) and also of \({\mathbf {H}}_{FEMU}\). Hence, the numerical errors are reduced during the Gauss-Newton FEMU minimization. However, it does not necessarily enhance the sensitivity to the sought parameters. Indeed, an increase of both eigenvalues can lead to a decrease of this ratio, which means that the optimized experiment may be less sensitive to the sought constitutive parameters. Graphically, with the example of Fig. 3, this criterion leads to an ellipse looking more like a circle, but it does not control the size of that circle.

Bertin et al. [20] and Chamoin et al. [23] suggest that the determinant of \([\mathbf {Cov}^{{\mathbf {q}}}]\) can be used as a criterion, which they call the uncertainty volume. In this case, all eigenvalues contribute equally to the criterion. But once again, it does not necessarily enhance the sensitivity to all constitutive parameters. As a matter of fact, the optimization procedure can decrease some eigenvalues while increasing others at the same time. In this case, parameters that were not well identified with the initial experiment can be even less well identified with the optimized experiment. Graphically, with the example of Fig. 3, this choice results in an ellipse with a smaller surface. Yet it does not prevent the ellipse from getting thinner and longer.

Bertin et al. [20] and Chamoin et al. [23] minimize the largest \([\mathbf {Cov}^{{\mathbf {q}}}]\) eigenvalue. In this case, only the worst parameter sensitivity matters. This choice ensures that each constitutive parameter will be identified with an uncertainty that will not be greater than the initial maximum uncertainty. Graphically, this choice makes the circumscribed circle of the ellipse smaller. The shape of the ellipse can vary but its largest semi-axis is necessarily smaller at the end of the optimization than the largest initial semi-axis. In the same spirit, the optimization problem will consist in minimizing the largest eigenvalue of \([\mathbf {Cov}^{{\mathbf {q}}}]\) or, equivalently, in maximizing the smallest eigenvalue of \({\mathbf {H}}_{FEMU}\):

$$\begin{aligned} {\mathbf {s}}^{e^\star } &= \underset{{\mathbf {s}}^e}{\arg \min } \; \lambda _{\max } \left( [\mathbf {Cov}^{{\mathbf {q}}}]\right) \; , \end{aligned}$$
(13)
$$\begin{aligned} &= \underset{{\mathbf {s}}^e}{\arg \min } \; \frac{1}{\lambda _{\min } \left( {\mathbf {H}}_{FEMU} \right) } \; , \end{aligned}$$
(14)
$$\begin{aligned} &= \underset{{\mathbf {s}}^e}{\arg \min } \; \frac{2\gamma ^2}{\lambda _{\min } \Big ( \varvec{\nabla }_q {{\mathbf {v}}} \; {\mathbf {H}}_{DIC} \; \varvec{\nabla }_q {{\mathbf {v}}}^T \Big )}\; . \end{aligned}$$
(15)

The objective function to minimize scales with the image noise variance \(\gamma ^2\). One way to improve parameter identification from full-field measurements thus consists in developing alternative DIC algorithms which are less sensitive to image noise [59, 60]. Since the noise is independent of the sample shape, it will not be taken into account in the sequel.
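For illustration only (not from the paper's code), the three scalar criteria discussed above can be compared directly from the eigenvalues of \([\mathbf {Cov}^{{\mathbf {q}}}]\); applied to the bivariate case of Fig. 3, this gives:

```python
import numpy as np

def criteria(cov_q):
    lam = np.linalg.eigvalsh(cov_q)            # eigenvalues, ascending order
    return {
        "condition_number": lam[-1] / lam[0],  # ratio criterion, Feld et al. [22]
        "uncertainty_volume": np.prod(lam),    # determinant, Bertin et al. [20]
        "largest_eigenvalue": lam[-1],         # criterion retained here, Eq. (13)
    }

print(criteria(np.array([[1.0, 0.5], [0.5, 2.0]])))
# condition_number ~ 2.78, uncertainty_volume = 1.75, largest_eigenvalue ~ 2.21
```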

Computation of the sensitivity fields w.r.t. constitutive parameters

First, in order to compute the sensitivity fields \(\varvec{\nabla }_q {{\mathbf {v}}}\), it is necessary to choose values for the different constitutive parameters of the selected constitutive law, since \({\mathbf {v}}({\mathbf {q}})\) must be known. These parameters are defined numerically before the optimization process and are not modified during the optimization. Hence, it is preferable to choose values that are not too different from those expected.

Since we aim at identifying linear elastic constitutive parameters, it is possible to compute the derivative of the simulated field with respect to a given parameter \({q_i}\) analytically. Indeed, \({\mathbf {v}}\) comes from the resolution of a static FE problem \({\mathbf {K}} {\mathbf {v}} = {\mathbf {f}}\), where the stiffness matrix \({\mathbf {K}}\) depends on \({\mathbf {q}}\) and the applied load \({\mathbf {f}}\) does not. The derivative reads:

$$\begin{aligned} {\mathbf {K}},_{{q_i}} {\mathbf {v}} + {\mathbf {K}} {\mathbf {v}},_{q_i} = 0 \; , \end{aligned}$$
(16)

which leads to:

$$\begin{aligned} {\mathbf {v}},_{q_i} = - {\mathbf {K}}^{-1} {\mathbf {K}},_{q_i} {\mathbf {v}} \; , \end{aligned}$$
(17)

where \(,_{q_i}\) denotes \(\frac{\partial }{\partial q_i}\). Since the FE basis functions do not depend on \({\mathbf {q}}\), only the Hooke matrix derivative is needed to compute \({\mathbf {K}},_{q_i}\). This derivative can be computed exactly when the constitutive law is linear. \({\mathbf {K}},_{q_i}\) can then be assembled with the same routines as for the stiffness matrix \({\mathbf {K}}\) using the Hooke matrix derivative instead of the Hooke matrix.

Each parameter \(q_i\) is usually normalized with its chosen initial value \({q_{i}^{0}}\) in the FEMU minimization process (to balance the sensitivities of the different parameters). Let us denote by \({\overline{q}}_i\) the normalized parameter such that \(q_i=q_{i}^{0} {\overline{q}}_i\). As a result, we use \( {\mathbf {v}},_{{\overline{q}}_i}\) instead of \({\mathbf {v}},_{q_i}\), and we have the following relation:

$$\begin{aligned} {{\mathbf {v}},_{{\overline{q}}_i} = q_{i}^{0} \; {\mathbf {v}},_{q_i}.} \end{aligned}$$
(18)
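A minimal sketch of this computation is given below, assuming the stiffness matrix `K`, its derivative `dK_dqi` (assembled with the Hooke matrix derivative) and the solution `v` are available; these names are illustrative placeholders, not the paper's routines.

```python
import scipy.sparse as sp
from scipy.sparse.linalg import splu

def sensitivity_field(K, dK_dqi, v, qi0):
    """Return dv/dq_i_bar = -q_i^0 * K^{-1} (dK/dq_i) v (Eqs. 17-18)."""
    lu = splu(sp.csc_matrix(K))     # in practice, reuse the factorization of K
    dv_dqi = -lu.solve(dK_dqi @ v)  # Eq. (17)
    return qi0 * dv_dqi             # normalization of Eq. (18)
```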

Approximation of the DIC Hessian to take into account the camera’s field of view

The last ingredient needed to compute the cost function in (15) is the approximation of the Hessian \({\mathbf {H}}_{DIC}\) of the DIC functional. This operator is given as an output of the DIC problem. Its form is detailed in (4). The optimization of the specimen geometry is intended to be carried out before any experiment. Since the speckle pattern is generally not known a priori, the \({\mathbf {H}}_{DIC}\) matrix cannot be computed as such, because \({\mathcal {I}}\) and thus \(\varvec{\nabla } {\mathcal {I}}\) do not yet exist.

Feld et al. [22] proposed to replace \({\mathbf {H}}_{DIC}\) with an identity matrix. In contrast, Bertin et al. [20] and Chamoin et al. [23] recommended the use of a mean-field assumption. This assumption consists in considering that the graylevel gradient \(\varvec{\nabla } {\mathcal {I}}\) varies much faster than the shape functions \({\mathbf {N}}\) in (4). As a consequence, the following approximation can be made:

$$\begin{aligned} {\mathbf {H}}_{DIC} \approx G_{\mathcal {I}}^2 \; \overline{{\mathbf {M}}} \; , \end{aligned}$$
(19)

with

$$\begin{aligned} \overline{{\mathbf {M}}} = \int _I {\mathbf {N}}^T {\mathbf {N}} \; dx \qquad \text {and} \qquad G_{\mathcal {I}}^2 = \frac{1}{|I|} \int _I \left[ \left( \frac{\partial {\mathcal {I}}}{\partial x} \right) ^2 + \left( \frac{\partial {\mathcal {I}}}{\partial y} \right) ^2 \right] dx \; . \end{aligned}$$

\(\overline{{\mathbf {M}}}\) is a pseudo mass matrix and \(G_{\mathcal {I}}\) is a measure of the mean graylevel gradient similar (yet not exactly equal) to the Mean Intensity Gradient (MIG) as defined by Pan et al. [61]. Similarly to the image noise variance term, this factor was removed from the objective function in this study, since its value depends on a speckle pattern which is unknown at this stage and constant with respect to the design variables \({\mathbf {s}}^e\). Note that it is possible to maximize this value by working on the pattern itself regardless of the shape [61, 62].

Now, since the aim is to optimize the shape of the specimen, its overall size may change. Consequently, the experimental set-up may have to be adapted, especially the camera’s field of view. Indeed, if the specimen is larger, the camera has to be placed further away (increase of the distance Z between the object and the camera) or the focal length f must be reduced in order to capture the whole ROI. By doing so, a given physical displacement will lead to a smaller displacement in pixels, given the finiteness of the image definition. To take this effect into account, the \({\mathbf {H}}_{DIC}\) matrix is written by considering world coordinates, in contrast to (19) and [20, 23] where the fields are expressed in pixel coordinates. To this end, the change of variables \(I = p(\Omega )\) is performed in (4), where \(\Omega \) is the physical ROI on the specimen, and p is the camera projector model [63]. To improve readability, \({\mathbf {N}}\) is redefined to be the FE shape functions in the physical ROI \(\Omega \), as if \({\mathbf {N}} \circ p^{-1}\) was written instead of \({\mathbf {N}}\) in the previous equations where quantities are defined in the image. \({\mathbf {H}}_{DIC}\) becomes:

$$\begin{aligned} {\mathbf {H}}_{DIC} \approx \frac{G_{\mathcal {I}}^2}{2} \int _\Omega | \det (\varvec{\nabla } p) | \; {\mathbf {N}}^T {\mathbf {N}} \; dX \; . \end{aligned}$$
(20)

A pin-hole camera model (see, e.g., [63]) is considered as a first-order approximation of the mapping from a point in the world at coordinates (X, Y, Z) to its projection at coordinates (x, y) in the image. It is possible to express \(\varvec{\nabla } p\). Indeed, the pinhole model states:

$$\begin{aligned} p : X, Y \quad \mapsto \quad x = -f_x \frac{X}{Z} + x_0 , \quad y = -f_y \frac{Y}{Z} + y_0 \; , \end{aligned}$$
(21)

where \(f_x\) and \(f_y\), in pixels, are the camera focal sampling parameters along the 2 image directions, Z is the distance between the specimen and the camera, and \(x_0\) and \(y_0\) are the center of the image (in pixels). The projector gradient can then be obtained:

$$\begin{aligned} \varvec{\nabla } p = \begin{bmatrix} - \frac{f_x}{Z} &{} 0\\ 0 &{} - \frac{f_y}{Z} \end{bmatrix} \; . \end{aligned}$$
(22)

\({\mathbf {H}}_{DIC}\) thus reads:

$$\begin{aligned} {\mathbf {H}}_{DIC} \approx \frac{G_{\mathcal {I}}^2}{2} \frac{f_x f_y}{Z^2} \; {\mathbf {M}}({\mathbf {s}}^e) \qquad \text {with} \qquad {\mathbf {M}}({\mathbf {s}}^e) = \int _\Omega {\mathbf {N}}^T {\mathbf {N}}\; dX, \end{aligned}$$
(23)

and where \({\mathbf {M}}\) is a true mass matrix computed in the FE coordinate system. We end up with an additional weighting coefficient which depends on the experimental setting: \(f_x\) (respectively \(f_y\)) depends on the focal length f with a parameter \(k_x\) (resp. \(k_y\)) that is intrinsic to the camera (it corresponds to a number of pixels per meter on the sensor), such that \(f_x=k_x\; f\) (resp. \(f_y=k_y \;f\)) (see, again, [63]). Assuming \(k_x \approx k_y = k\), the DIC Hessian reads:

$$\begin{aligned} {\mathbf {H}}_{DIC} \approx \frac{G_{\mathcal {I}}^2}{2} \frac{k^2f^2}{Z^2} \; {\mathbf {M}}({\mathbf {s}}^e). \end{aligned}$$
(24)

This expression has the advantage of explicitly taking into account camera settings that have an effect on the field of view. It is theoretically possible to improve the identification by increasing the sensor definition k, i.e. by considering higher-definition sensors. But as this parameter depends on the choice of a camera and is anyway limited by technological capabilities, it is considered fixed.

So far, we are only looking to improve a specimen, without considering a particular experimental setup. In the following, an equivalent objective function is formulated that does not involve any camera parameters. Indeed, by application of Thales’ theorem and assuming that pictures are always taken so as to maximize the image resolution on the ROI, the coefficient \(\frac{f^2}{Z^2}\) can be linked to the ratio of the sensor area \(S_0\) to the ROI area S:

$$\begin{aligned} \frac{f_x f_y}{Z^2} = \frac{k^2 f^2}{Z^2} = k^2 \; \frac{S_0}{S} \; . \end{aligned}$$
(25)

S depends on the specimen geometry. Therefore, it must appear in the cost function, in the form of the area of the rectangle that is tangent to the ROI. Conversely, \(S_0\) being the sensor size, it is considered fixed. Eventually, the DIC Hessian is approximated by:

$$\begin{aligned} {\mathbf {H}}_{DIC} \approx \frac{G_{\mathcal {I}}^2}{2} \frac{k^2S_0}{S} \; {\mathbf {M}}({\mathbf {s}}^e). \end{aligned}$$
(26)
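Under these assumptions, evaluating the approximate DIC Hessian only requires the mass matrix of the current mesh and the area of the rectangle tangent to the ROI; a possible sketch (with illustrative unit values for the constants that are dropped from the cost function) is:

```python
import numpy as np

def roi_bounding_area(nodes):
    """Area S of the axis-aligned rectangle tangent to the ROI, from nodal coordinates."""
    span = nodes.max(axis=0) - nodes.min(axis=0)
    return span[0] * span[1]

def h_dic_approx(M, S, G_I=1.0, k=1.0, S0=1.0):
    """Eq. (26): H_DIC ~ (G_I^2 / 2) * (k^2 * S0 / S) * M."""
    return 0.5 * G_I**2 * (k**2 * S0 / S) * M
```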

Improvement with constraints to take into account the boundary conditions magnitude

A tension test is considered for the identification of elastic constitutive parameters. In this work, only the shape is optimized: the boundary conditions are fixed during the optimization process.

As mentioned in the introduction, studies in the literature do not limit the maximum admissible strain or stress. As a result, there may be a competition between the loading magnitude and the shape parametrization. Indeed, if the maximum strain can increase during shape optimization, then the question is: which shape is better, (1) the optimized one with a fixed loading or (2) the initial one with an increased loading? This competition is due to the fact that the loading magnitude is not one of the design variables. It therefore seems extremely important to consider the loading level in the optimization problem. In the following, given the linear elasticity context, it is chosen to optimize the shape without changing either the force (or imposed displacement) applied to the specimen or the maximum strain.

Limiting the maximum strain does not prohibit having a specimen which presents a strain concentration. It only allows one to separate the improvement due to a better strain distribution within the specimen from that due to an increase of the strain level caused by a larger loading magnitude. Finally, a model often needs to be calibrated in a certain strain range, and this optimization constraint also forces the experiment to properly sample the behavior in the desired strain range.

The equivalent strain can be computed from the displacement field \({\mathbf {v}}\), which is already available from the sensitivity field computations of the cost function (see (17), where \({\mathbf {v}}\) also has to be computed to get \({\mathbf {v}},_{q_i}\)). Hence, a constraint that takes into account the equivalent strain values on the specimen under loading was added to the optimization problem. The constraint reads:

$$\begin{aligned} \forall k \in [1..n_{pg}], \quad \frac{(\varepsilon _{eq}({\mathbf {s}}^e))_k}{ \varepsilon _{max} } \le 1 \; , \end{aligned}$$
(27)

where \((\varepsilon _{eq})_k\) is the equivalent strain value at the Gauss point k and \(\varepsilon _{max}\) is a given threshold. In this study, the Von Mises equivalent strain was chosen arbitrarily.

Resulting constrained optimization problem

Recapitulating, let us now write the full constrained optimization problem to solve. As stated above, although they could be optimized independently, the external loading magnitude, the image noise \(\gamma \), the image gradient \(G_{{\mathcal {I}}}\), the sensor definition k and the sensor size \(S_0\) are considered fixed since they do not depend on the shape of the object. Consequently, we end up with the following constrained optimization problem to solve:

$$\begin{aligned} {\mathbf {s}}^{e^\star } = \underset{{\mathbf {s}}^e}{\arg \min } \; \frac{S({\mathbf {s}}^e)}{\lambda _{\min } \Big ( \varvec{\nabla }_q {{\mathbf {v}}}({\mathbf {s}}^e) \; {\mathbf {M}}({\mathbf {s}}^e) \; \varvec{\nabla }_q {{\mathbf {v}}}({\mathbf {s}}^e)^T \Big )} \quad \text {s.t.} \quad \left\{ \begin{array}{ll} \forall k \in [1..n_{pg}], & \dfrac{\left( \det ({\mathbf {J}}({\mathbf {s}}^e))\right) _k}{\left( \det ({\mathbf {J}}_0)\right) _k} \ge \epsilon _{Jac} \\ \forall k \in [1..n_{pg}], & \dfrac{(\varepsilon _{eq}({\mathbf {s}}^e))_k}{\varepsilon _{max}} \le 1 \end{array} \right. \end{aligned}$$
(28)

In other words, the aim is to find the design variables \({\mathbf {s}}^e\) that minimize the largest eigenvalue of the approximate covariance matrix of the constitutive parameters \({\mathbf {q}}\), under the constraints that the normalized Jacobian of the transformation at each Gauss point k remains above the positive threshold \(\epsilon _{Jac}\) and that the maximum equivalent strain does not increase. A flowchart of the overall optimization process proposed to optimize a specimen shape with respect to identification uncertainty is depicted in Fig. 4.

Fig. 4
figure 4

Flowchart of the proposed spline-based optimization process to improve a specimen shape with respect to identification uncertainty
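To complement the flowchart of Fig. 4, a schematic outline of one cost-function evaluation is sketched below. Every called routine (`assemble_K`, `assemble_dK_list`, `assemble_M`, `load_vector`, `linear_solve`, `roi_area`) is a placeholder for the building blocks described in the previous sections; this is a hedged sketch, not the actual implementation.

```python
import numpy as np

def shape_cost(s_e, nodes0, C_update, assemble_K, assemble_dK_list,
               assemble_M, load_vector, linear_solve, roi_area):
    # mesh update, Eq. (11); assumes (x, y) DOF interleaving per node
    nodes = nodes0 + (C_update @ s_e).reshape(-1, 2)
    K = assemble_K(nodes)
    v = linear_solve(K, load_vector(nodes))             # simulated field v(q)
    sens = np.vstack([linear_solve(K, -(dK @ v))        # sensitivities, Eq. (17)
                      for dK in assemble_dK_list(nodes)])
    H = sens @ assemble_M(nodes) @ sens.T / roi_area(nodes)   # operator of (28)
    return 1.0 / np.linalg.eigvalsh(H)[0]   # ~ largest eigenvalue of Cov^q (up to constants)
```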

Analysis of a toy problem

To analyze the cost function, let us consider the analytic optimization of a simple isotropic plain specimen in tension as presented in Fig. 5. An isotropic linear elastic material is chosen, thus involving two parameters: the Young modulus E and the Poisson ratio \(\nu \). Symmetry Dirichlet boundary conditions are applied at the bottom and right edges, the upper edge remains free, and the left edge is subjected to a uniformly distributed load. F denotes the resultant force, which is kept constant.

Fig. 5
figure 5

Example of the simple isotropic plain specimen in tension: definition of the dimensions of the domain and the applied boundary conditions in the (x, y) coordinate system

In this very simple case, a closed-form solution can be found when only the height h and the length L of the rectangle are the design variables. The resulting displacement field is the following:

$$\begin{aligned}&u_x(x,y) = \frac{F}{h E} x \; , \end{aligned}$$
(29)
$$\begin{aligned}&u_y(x,y) = - \frac{\nu F}{h E} y \; . \end{aligned}$$
(30)

When only one constitutive parameter is considered, the operator whose smallest eigenvalue is involved in the cost function of (28) reduces to a scalar, namely the integral of the squared sensitivity field over the ROI:

$$\begin{aligned} {\mathbf {v}}_{,q_i}^T \; {\mathbf {M}} \; {\mathbf {v}}_{,q_i} = \int _\Omega {\mathbf {v}}_{,q_i}^T \; {\mathbf {v}}_{,q_i} \; dx = \int _\Omega \left\| {\mathbf {v}}_{,q_i} \right\| ^2 dx \; . \end{aligned}$$
(31)

Note that, in the case of a single constitutive parameter, the normalization (18) was not applied.

Cost function w.r.t. Poisson ratio only

Optimizing the specimen shape through Problem (28) with respect to the Poisson ratio only reads:

$$\begin{aligned} (L,h)^\star = \underset{(L,h)}{\arg \min } \; \frac{3E^2}{F^2 L h} \; \qquad \text {or} \qquad (L,h)^\star = \underset{(L,h)}{\arg \min } \; \frac{3E^2}{F^2} \; , \end{aligned}$$
(32)

without or with the weighting of the cost function by the specimen area \(S=hL\), respectively. Without weighting, the cost function will push the sample height h to infinity. This is indeed relevant because the displacements related to the Poisson effect are maximum far from the horizontal symmetry axis. In reality, increasing the height h of the specimen will require increasing the field of view by moving back (increasing Z) or adjusting the lens (reducing f), which, given the finiteness of the sensor, does not change the optimality of the specimen. When the field of view of the camera is considered by adding the weight S to the functional, it can be seen that the latter no longer depends on h (or L), which is consistent with the previous analysis.

Cost function w.r.t. Young Modulus only

When the cost function of (28) is written with respect to the Young modulus E only, setting \(\nu =0\), it reads:

$$\begin{aligned} (L,h)^\star = \underset{(L,h)}{\arg \min } \; \frac{3h^2E^4}{F^2 L^2} \;, \end{aligned}$$
(33)

which clearly illustrates the competition between the height h and the load resultant F. In this example, optimizing the shape of the specimen (by reducing the height h) is strictly equivalent to increasing the external force F in the same proportion. In this case, the intrinsic quality of the geometry is not improved. Worse, the risk is to produce more complex geometries, which are therefore more expensive to machine, whereas a simple increase in the load would be sufficient. Finally, one could imagine even worse situations where the refinement of the specimen improves the identification less than a simple increase of the loading.

It is then important to consider the magnitude of the loading in the cost function. This was done here, in an elastic context, by limiting the maximum strain to its initial value, which decouples the improvement of the specimen due to an apparent increase of the loading (increase of the strains) from that due to an improvement of the strain distribution in the specimen at constant maximum strain.
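As a side check (not part of the paper), the closed-form expressions (32) and (33) can be recovered symbolically from the displacement field (29)-(30), for instance with SymPy:

```python
import sympy as sp

x, y, L, h, E, nu, F = sp.symbols("x y L h E nu F", positive=True)
ux = F * x / (h * E)            # Eq. (29)
uy = -nu * F * y / (h * E)      # Eq. (30)

def weighted_cost(param):
    """Area-weighted scalar cost for one parameter: S divided by the integral of Eq. (31)."""
    sens = sp.Matrix([sp.diff(ux, param), sp.diff(uy, param)])
    integral = sp.integrate(sp.integrate(sens.dot(sens), (y, 0, h)), (x, 0, L))
    return sp.simplify((h * L) / integral)   # weight by the area S = h*L

print(weighted_cost(nu))              # -> 3*E**2/F**2, Eq. (32)
print(weighted_cost(E).subs(nu, 0))   # -> 3*E**4*h**2/(F**2*L**2), Eq. (33)
```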

Numerical examples

As a preamble, let us specify that all the numerical experiments were performed with an in-house Python code, and GMSH was used for FE mesh generation. A gradient-based optimization algorithm was chosen, as commonly done in spline-based shape optimization [47,48,49, 51]. More precisely, the SLSQP method of SciPy’s optimize.minimize function was used as a black box. It is based on the SQP method [64], which can be viewed as an extension of the Newton method to constrained optimization. At each major iteration, an approximation of the Hessian of the Lagrangian function associated with (28) is made using a quasi-Newton updating method. These methods are appealing because the Hessian is not directly computed but approximated through the gradient variations during the resolution, thereby offering simplicity, minimal computational cost and good convergence properties.
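As an illustration of how problem (28) can be handed to this solver, the following hedged sketch sets up the two inequality constraints in the dictionary form expected by scipy.optimize.minimize; `cost`, `det_J`, `det_J0`, `eps_eq` and `eps_max` are placeholders for the routines described in the previous sections, and the threshold value is illustrative.

```python
import numpy as np
from scipy.optimize import minimize

def solve_shape_problem(s0, cost, det_J, det_J0, eps_eq, eps_max, eps_jac=0.2):
    constraints = [
        # Eq. (12): normalized Jacobian determinants above the threshold
        {"type": "ineq", "fun": lambda s: det_J(s) / det_J0 - eps_jac},
        # Eq. (27): equivalent strains below the initial maximum
        {"type": "ineq", "fun": lambda s: 1.0 - eps_eq(s) / eps_max},
    ]
    res = minimize(cost, np.asarray(s0, dtype=float), method="SLSQP",
                   constraints=constraints,
                   options={"maxiter": 100, "ftol": 1e-9})
    return res.x, res
```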

Validation of the optimization procedure on an isotropic plain specimen with respect to one single constitutive parameter

To start with, the tension beam of Sect. “Analysis of a toy problem” was considered with \(L=200\) mm. Instead of deriving an analytic optimum, we now perform the numerical resolution of problem (28). The height h(x) of the specimen is adjusted to minimize the variance of the Young modulus only. It is parametrized by the evolution of the top line, which plays the role of the feature edge. The design space is built from the 6 control points of a quadratic morphing box.

Only two design variables were actually used in this example. They were chosen so as to keep the left part of the top edge horizontal, as shown in Fig. 6. To do so, a \(C^0\) line was added at \(x=100\) mm by repeating a knot. \(C^1\)-continuity was then ensured by imposing some control points to move like their neighbors, hence enforcing tangential directions [51]. Exceptionally in this example, to verify that the algorithm is robust, the initial shape was set such that \(h(x)=100\) mm, which corresponds to a homogeneous strain 6 times smaller than the maximum strain allowed. Otherwise the initial shape would already be optimal and the convergence of the algorithm could not be analysed.

Fig. 6
figure 6

Example of the isotropic plain specimen: definition of the two design variables. First design variable: displacement along y of the 4 left control points. Second design variable: displacement along y of the 2 right control points. A \(C^0\) line was added at \(x = 100\) mm. In light grey, the initial shape (\(h(x)=100\) mm) and, in dark grey, the result of the optimization process (\(h_{opt}(x)=16.67\) mm)

Fig. 7
figure 7

Example of the isotropic plain specimen with only two spline-based design variables: evolution of the cost function value and geometry with the iteration number of the optimization process

Figure 7 shows the results obtained with the optimization algorithm. As one would expect, the optimal shape is still a rectangle but the height is divided by 6. This simple example validates the expression of the cost function as well as the numerical FFD-based procedure for updating the geometry. The optimal value of the cost function is reduced by a factor of 36 (about one and a half orders of magnitude), which is consistent with the analytic expression (33).

Remark

Let us notice that, at the first iteration, the objective function is lower than its value at convergence. However, the algorithm used (based on the SQP method) does not necessarily produce feasible iterates, in the sense that the constraints may not be satisfied at each iteration. This may explain why the algorithm did not stop at this first solution.

A more complex example: open hole orthotropic specimen optimization with respect to several constitutive parameters

The developed methodology was next applied to a more complex problem. For demonstration purposes, let us investigate the example depicted in Fig. 8. This problem, inspired by [22], consists in optimizing an orthotropic specimen in tension in order to minimize the uncertainty on the elastic parameters. In [22], the position of the holes was already optimized. The study presented here goes further by improving the geometry of the two holes in addition to their position. Note that this choice of limiting the analysis to the hole shapes is arbitrary; obviously, the optimization of the outer edge of the specimen could also have been considered with our methodology, as in Sect. "Validation of the optimization procedure on an isotropic plain specimen with respect to one single constitutive parameter". In this example, two quadratic periodic univariate spline boxes (1D-FFD) were used to optimize the hole shapes, as shown in Fig. 9. Note that the position of the FE nodes in the parametric space of the spline curves is denoted by \(\xi ^{FE}\).

Fig. 8

Example of the orthotropic specimen with two holes: boundary condition definition, initial geometry to be optimized and corresponding finite element mesh (inspired from [22])

Fig. 9

Definition of the design space for the orthotropic specimen with two holes: 1D-FFD morphing box (left: parametric space; right: physical space) with its control points (black points) and position of the controlled FE nodes (blue points)

Similarly to [22], and to reduce the number of design variables in this specific case, the specimen geometry was constrained to preserve central symmetry, which halves the number of design variables. This symmetry can be expressed as a special form of the reduction matrix \({\mathbf {C}}^{e}_{s}\), and the matrix \({\mathbf {C}}_{update}\) can then be defined to map the design variables \({\mathbf {s}}^e\) to the modification of all FE node positions \(s_{fe}\) (see (11)).
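A schematic sketch of this chaining of operators is given below; the dimensions are made up and random sparse matrices stand in for the actual spline and symmetry operators, so only the mapping \(s_{fe} = {\mathbf {C}}_{update}\,{\mathbf {s}}^e\) is illustrated.

```python
import numpy as np
import scipy.sparse as sp

n_fe, n_cp, n_red = 200, 16, 8   # illustrative sizes: FE dof, control-point dof, design variables

# Stand-in for the FFD operator: spline basis functions evaluated at the
# parametric coordinates xi_FE of the FE nodes lying on the hole edges.
C_ffd = sp.random(n_fe, n_cp, density=0.1, format="csr")

# Stand-in for the symmetry reduction: each design variable drives a pair of
# centrally symmetric control points (here simply with opposite signs).
C_s = sp.vstack([sp.identity(n_red), -sp.identity(n_red)], format="csr")

C_update = C_ffd @ C_s           # maps the reduced design variables to FE node updates

s_e = np.random.rand(n_red)      # reduced design variables
s_fe = C_update @ s_e            # update of the controlled FE node positions
print(s_fe.shape)
```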

Optimization w.r.t. orthotropic linear elastic parameters

The goal is to improve this geometry with respect to its sensitivity to the orthotropic linear elastic constitutive parameters: the longitudinal Young modulus \(E_1\), the transverse Young modulus \(E_2\), the shear modulus \(G_{12}\) and the Poisson ratio \(\nu _{12}\). Similarly to [22], Dirichlet boundary conditions were prescribed and no information other than the displacement field was used. As a consequence, the Young moduli and the shear modulus cannot be identified as such. Therefore, the sample is optimized with the ratios \(\frac{E_2}{E_1}\) and \(\frac{G_{12}}{E_1}\), together with \(\nu _{12}\), as the constitutive parameters of interest in this example. The strain constraint is expressed as in (27) with \(\varepsilon _{max} = \underset{k}{\max } (\varepsilon _{eq}({\mathbf {s}}^e_0))_k\).

First optimization phase The optimization leads to the geometry shown in Fig. 10. The optimal shape is superimposed on the initial shape, and the deformed morphing box (along with the optimal position of the control points) is also represented in red. It is interesting to see that the size of the holes has increased without increasing the maximum equivalent stress; in fact, the area of stress concentration has expanded. Figure 11 shows the evolution of the cost function. It can be seen that the cost function value is improved by a factor of 10 without reaching higher equivalent strain values. This means that the standard uncertainty on the worst identified parameter was reduced by a factor of more than 3 with the very same experimental set-up.

Fig. 10

Improved geometry (in black) compared to the initial geometry (in grey), and 1D-FFD morphing box and its control points position at the end of the first optimization (in red)

Fig. 11

Example of the orthotropic plate with two holes: evolution of the cost function value and specimen shape as a function of the iteration number in the first optimization run

Fig. 12

Example of the orthotropic plate with two holes: evolution of the transformation Jacobian of each Gauss point as a function of the iteration number for the first optimization run (\(\epsilon _{Jac} = 0.1\))

Next, in Fig. 12 the absolute value of the determinant of the transformation Jacobian at each Gauss point is plotted against the iteration number. The constraint was set with \(\epsilon _{Jac}=0.1\). It can be noticed that, at the end of the optimization process, the Jacobian constraint is active for some Gauss points, that is to say, the determinant of the Jacobian is exactly equal to 0.1. This is not due to purely geometric issues such as loops on the edges; it comes from the mesh morphing technique used to propagate the edge deformation to the rest of the FE nodes (mesh morphing step). Indeed, when the edge deformation is too large, the elastic morphing can lead to flipped (or highly warped) elements, even if the specimen edges remain physically sound (see Fig. 13). In this situation, the shape morphing technique used to limit the computational cost actually restricts the shape evolution. There is thus no other choice but to perform a remeshing step and restart the optimization.
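For illustration, the sketch below evaluates the absolute determinant of the mapping Jacobian at the Gauss points of a single 4-node quadrilateral element; the element type, the Gauss rule and any normalization of the determinant are assumptions of this example, not necessarily those of the actual implementation.

```python
import numpy as np

EPS_JAC = 0.1   # lower bound used in the constraint  |det J| - eps_Jac >= 0

def quad4_jacobians(xe):
    """Return |det J| at the 2x2 Gauss points of a bilinear quadrilateral.

    xe: (4, 2) array of nodal coordinates, numbered counter-clockwise.
    """
    g = 1.0 / np.sqrt(3.0)
    dets = []
    for xi, eta in [(-g, -g), (g, -g), (g, g), (-g, g)]:
        # derivatives of the bilinear shape functions w.r.t. (xi, eta)
        dN = 0.25 * np.array([[-(1 - eta), -(1 - xi)],
                              [ (1 - eta), -(1 + xi)],
                              [ (1 + eta),  (1 + xi)],
                              [-(1 + eta),  (1 - xi)]])
        J = xe.T @ dN                     # 2x2 Jacobian of the iso-parametric map
        dets.append(abs(np.linalg.det(J)))
    return np.array(dets)

# A mildly distorted element: the constraint values must all remain >= 0.
xe = np.array([[0.0, 0.0], [1.0, 0.0], [1.2, 1.1], [0.0, 1.0]])
print(quad4_jacobians(xe) - EPS_JAC)
```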

Fig. 13

Flipped elements obtained when the deformation of the edges is too large (circled in blue)

Second optimization phase As mentioned in a remark at the end of Sect. "A spline-based regular reduced-order design space", a solution is then to remesh the interior of the specimen at the end of the optimization process and to run another shape optimization with this new mesh. By keeping, on the feature edges (in particular on the optimized holes), the same nodes as in the old mesh, the same FFD matrix \({\mathbf {C}}_{FFD}\) as in the first optimization process can be kept. Hence, only the morphing operator \({\mathbf {C}}_{m}\) needs to be re-assembled.

The results obtained after the second optimization step are shown in Figs. 14 and 15. With the new mesh, it is possible to reach larger free-form deformations and thus larger holes. During this step, the cost function value is reduced by a further factor of 3, which brings the overall reduction factor to more than 30.

Fig. 14

Example of the orthotropic plate with two holes: improved geometry after the second optimization run (in black) compared to the optimal shape obtained after the first optimization run (in grey), and corresponding 1D-FFD morphing box and its control points position (in red)

Fig. 15

Example of the orthotropic specimen with two holes: evolution of the cost function value and specimen shape during the second optimization process (after the remeshing step)

Again, the evolution of the transformation Jacobian at each Gauss point is plotted against the iteration number (see Fig. 16). This time, the constraint is not active at convergence. What stopped the optimization are the bounds that were arbitrarily chosen for the design variables.

Fig. 16

Example of the orthotropic plate with two holes: evolution of the transformation Jacobian of each Gauss point as a function of the iteration number for the second optimization run (\(\epsilon _{Jac} = 0.1\))

Overview of the results To get an overview of how the sensitivities to all three parameters were impacted by the optimization process, it is convenient to look at \({\mathbf {H}}_{FEMU}^{-1}\) and its eigenvalues \(\lambda _i\), as detailed in Table 1. Here, the parameters are rather decoupled (covariance values are small compared to variance values), and the worst initially identifiable parameter was \(\frac{E_2}{E_1}\). At the end of the first optimization step, its variance is divided by 10 and the parameters are still rather decoupled. At the end of the second optimization step, its variance decreases again. Note that some parameters are now coupled (\(\frac{E_{2}}{E_1}\) and \(\nu _{12}\)). It can also be observed that the variance of \(\frac{G_{12}}{E_1}\) decreases at each step, while that of \(\nu _{12}\) slightly increases, in such a way that the eigenvector \({\mathbf {w}}_i\) associated with the greatest eigenvalue at the last step has its largest component on this constitutive parameter.

Table 1 Covariance matrix \({\mathbf {H}}_{FEMU}^{-1}\) (the three parameters are ordered as: \(\frac{E_2}{E_1}\), \(\frac{G_{12}}{E_1}\), \(\nu _{12}\)), its eigenvalues \(\lambda _i\) and the associated eigenvectors \({\mathbf {w}}_i\) (same order)
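The eigen-analysis used to read Table 1 can be reproduced with a few lines of linear algebra; the covariance values below are purely illustrative and are not those of Table 1.

```python
import numpy as np

# Illustrative symmetric covariance matrix, ordered as (E2/E1, G12/E1, nu12).
H_femu_inv = np.array([[4.0e-6, 1.0e-7, 6.0e-7],
                       [1.0e-7, 9.0e-7, 2.0e-8],
                       [6.0e-7, 2.0e-8, 2.5e-6]])

# The largest eigenvalue gives the variance of the worst-identified combination
# of parameters; its eigenvector gives the corresponding direction.
eigvals, eigvecs = np.linalg.eigh(H_femu_inv)
worst = int(np.argmax(eigvals))
names = ["E2/E1", "G12/E1", "nu12"]
print("largest eigenvalue :", eigvals[worst])
print("dominant component :", names[int(np.argmax(np.abs(eigvecs[:, worst])))])
```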

These observations can also be made by looking at the sensitivity fields for each constitutive parameter. Indeed, Fig. 17 shows that the sensitivity magnitude increases for \(\frac{E_2}{E_1}\) and \(\frac{G_{12}}{E_1}\), and slightly decreases for \(\nu _{12}\).

Fig. 17

Sensitivity fields magnitude for each constitutive parameter, initially and at each optimization step

Fig. 18

Example of the orthotropic specimen with two holes: evolution of the cost function value and zooms on the hole shapes during the optimization process (including one remeshing step each) for 2 different initial positions of the holes

A very ill-posed problem

To illustrate the strong ill-posedness of the problem, the algorithm was started from two different initial shapes. The orthotropic linear elastic open-hole tensile test of the previous section was considered again, with two slightly different initial positions of the holes.

Figure 18 presents the evolution of the objective function during the convergence of the algorithm for both initializations, along with the evolution of the corresponding shapes. It can be noticed that the two different initial guesses lead to significantly different optimized shapes. However, the objective functions of the optimized shapes are of the same order of magnitude, which means that they are two different new designs that both significantly improve the identification. This illustrates the presence of a possibly large number of local minima. This property strengthens our choice to consider reduced and regular design spaces for such problems, instead of overly large and generic design spaces such as those of FE-based or topology optimization. From an engineering point of view, running the very same algorithm with a multistart strategy could be interesting, as it may provide a set of equivalently optimal shapes from which engineers could select based on other criteria such as, for instance, machinability.
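A minimal sketch of such a multistart loop is given below, with a toy multi-minima objective standing in for the FE-based cost function; only the pattern (several initializations, same solver, collection of the distinct optima) is meant to be illustrated.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

def cost(s):
    # stand-in objective exhibiting several local minima (illustrative only)
    return float(np.sum(np.sin(3.0 * s) ** 2 + 0.1 * (s - 1.0) ** 2))

# Multistart: run the same SLSQP optimization from several initial designs and
# keep the set of optimized shapes for an a posteriori engineering choice.
starts = [rng.uniform(-2.0, 2.0, size=4) for _ in range(5)]
results = [minimize(cost, s0, method="SLSQP", bounds=[(-2.0, 2.0)] * 4)
           for s0 in starts]
for r in sorted(results, key=lambda r: r.fun):
    print(round(r.fun, 4), np.round(r.x, 3))
```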

Conclusion

In this paper, a methodology has been proposed to improve the sensitivity of a test to the constitutive parameters of a given model through specimen shape optimization. Our approach relies on non-invasive spline tools that make it possible to keep a classic FE formalism while restricting the optimization to regular, manufacturable designs. It is also meant to fill the gap between existing approaches in terms of design space, halfway between topology and parametric optimization. Indeed, the FFD-based reduced-basis approach developed herein allows working with few design variables while keeping a search space rich enough to reach a vast range of geometries, provided that their topology is the same. We considered the case where a pre-design of the specimen is available and the goal is to further improve its geometry with respect to the sensitivity to constitutive parameters for given boundary conditions. In other words, the method is not intended to determine the ideal specimen (if one exists), but rather to improve an existing, empirically designed specimen.

Overall, the developed solution strategy consists in a non-invasive modern CAD-based shape optimization that could obviously be applied successfully to other optimization problems (such as structural shape optimization).

To define the optimization problem, the starting point was the cost function formulation of Bertin et al. [20], which makes sense from both experimental and mathematical points of view. This objective function was improved by including information coming from the camera model, in order to handle possible size changes throughout the optimization process. Furthermore, the material sensitivity fields, which are the derivatives of the displacement field with respect to the constitutive parameters, were computed analytically, i.e. without resorting to finite difference methods. Constraint functions were also added to ensure that the obtained geometry remains physically sound. The first one involves the Jacobian of the FE mesh transformation from the reference element to the physical element and ensures that no element is flipped or too distorted. The second one sets a maximum equivalent strain, kept equal to the initial maximum equivalent strain, in order to guarantee that the decrease of the cost function cannot be attributed to the boundary condition values. Let us note that these modeling ingredients could be applied to other optimization solution strategies (independently of the definition of the design variables), which should improve the obtained specimen shape in any case.

The validation of our methodology was in particular carried out on a specimen in tension with two holes. \(C^1\)-continuous periodic B-splines were used to create univariate morphing boxes that control the FE nodes on the edge of each hole. The edge deformation was then propagated to the rest of the mesh via the solution of an elastic morphing problem, so that no remeshing is necessary during the optimization process itself. The cost function value was reduced by a factor of 10, plus an additional factor of 3 if the sample is re-meshed and the optimization run again. Overall, the method was able to reduce the identification uncertainty by a factor of 6.

This work offers many opportunities for future investigation. Considering only the solution of the optimization problem, a multilevel optimization of the shape could be performed. Other algorithms or multistart procedures could also be employed to reduce the dependence on the initialization. From the problem modeling point of view, a first improvement could consist in distinguishing between the ROI (where good sensitivity is needed) and the whole structure (on which the mechanical problem is solved). The position and size of the ROI could also be defined as design variables. Finally, the extension to non-linear material behaviors constitutes one of the major prospects of this work. Although the proposed methodology can be applied in principle, its detailed implementation in the case of non-linear constitutive laws will require further developments. It will not be straightforward, for example, to compute the sensitivity of the displacement fields w.r.t. the constitutive parameters in a fully analytical way. The proper computation of such fields will depend on the type of non-linearity, and it may become invasive with respect to the simulation tool. If one wishes to remain non-invasive, brute-force global finite difference approaches may be required, at a large computational cost. Furthermore, from a modeling point of view, boundary conditions should also be considered as design variables in a non-linear context.
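As an illustration of this non-invasive fallback, the sketch below computes sensitivity fields by forward finite differences around a stand-in solver; the function solve_fe is hypothetical, and each constitutive parameter requires one additional full solve.

```python
import numpy as np

def solve_fe(params):
    """Stand-in for a (possibly non-linear) FE solve returning the nodal
    displacement field for a given set of constitutive parameters."""
    x = np.linspace(0.0, 1.0, 11)
    return params[0] * x + np.tanh(params[1] * x)   # purely illustrative response

def fd_sensitivities(params, rel_step=1e-6):
    """Non-invasive sensitivity fields du/dp_i by forward finite differences."""
    u0 = solve_fe(params)
    S = np.zeros((u0.size, len(params)))
    for i, p in enumerate(params):
        dp = rel_step * max(abs(p), 1.0)
        perturbed = np.array(params, dtype=float)
        perturbed[i] += dp
        S[:, i] = (solve_fe(perturbed) - u0) / dp   # one extra solve per parameter
    return S

S = fd_sensitivities([2.0, 5.0])
print(S.shape)   # (n_dof, n_params): one sensitivity field per parameter
```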

Availability of data and materials

Not applicable.

References

  1. Kirchdoerfer T, Ortiz M. Data-driven computational mechanics. Comput Methods Appl Mech Eng. 2016;304:81–101.

  2. Oden JT, Belytschko T, Fish J, Hughes TJR, Johnson C, Keyes D, Laub A, Petzold L, Srolovitz D, Yip S. Simulation-based engineering sciences. Technical report, NSF; 2006. www.nsf.gov/pubs/reports/sbes_final_report.pdf.

  3. Chinesta F, Cueto E, Abisset-Chavanne E, Duval JL, El Khaldi F. Virtual, digital and hybrid twins: a new paradigm in data-based engineering and engineered data. Arch Comput Methods Eng. 2020;27:105–34.

  4. ASTM: Standard test method for in-plane shear response of polymer matrix composite materials by tensile test of +/- 45 degree laminate. American Society for Testing and Materials. 1994. D3518/D3518M, 105–119.

  5. Chisholm SA, Castro JF, Chapman BD, Karayev KZ, Gunther AJ, Kabir MH. Smarter testing through simulation for efficient design and attainment of regulatory compliance. In: Niepokolczycki A, Komorowski J, editors. ICAF 2019—Structural Integrity in the Age of Additive Manufacturing; 2020. pp. 292–307.

  6. Grédiac M, Hild F. Full-field measurements and identification in solid mechanics. Wiley; 2012.

  7. Avril S, Bonnet M, Bretelle A-S, Grédiac M, Hild F, Ienny P, Latourte F, Lemosse D, Pagano S, Pagnacco E, Pierron F. Overview of identification methods of mechanical parameters based on full-field measurements. Exp Mech. 2008;48(4):381–402.

  8. Roux S, Hild F. Optimal procedure for the identification of constitutive parameters from experimentally measured displacement fields. Int J Solids Struct. 2020;184:14–23.

  9. Kavanagh KT, Clough RW. Finite element applications in the characterization of elastic solids. Int J Solids Struct. 1971;7:11–23.

  10. Mathieu F, Leclerc H, Hild F, Roux S. Estimation of elastoplastic parameters via weighted FEMU and integrated-DIC. Exp Mech. 2015;55(1):105–19.

  11. Serra J, Pierré J-E, Passieux J-C, Périé J-N, Bouvet C, Castanié B. Validation and modeling of aeronautical composite structures subjected to combined loadings: the VERTEX project. Part 1: experimental setup, FE-DIC instrumentation and procedures. Compos Struct. 2017;179:224–44.

  12. Molimard J, Riche R, Vautrin A, Lee JR. Identification of the four orthotropic plate stiffnesses using a single open-hole tensile test. Exp Mech. 2005;45(5):404–11.

  13. Passieux J-C, Bugarin F, David C, Périé J-N, Robert L. Multiscale displacement field measurement using digital image correlation: application to the identification of elastic properties. Exp Mech. 2015;55(1):121–37.

  14. Kajberg J, Lindkvist G. Characterisation of materials subjected to large strains by inverse modelling based on in-plane displacement fields. Int J Solids Struct. 2004;41(13):3439–59.

  15. Demmerle S, Boehler JP. Optimal design of biaxial tensile cruciform specimens. J Mech Phys Solids. 1993;41(1):143–81.

  16. Pottier T, Toussaint F, Vacher P. Contribution of heterogeneous strain field measurements and boundary conditions modelling in inverse identification of material parameters. Eur J Mech A/Solids. 2011;30(3):373–82.

  17. Schmaltz S, Willner K. Comparison of different biaxial tests for the inverse identification of sheet steel material parameters. Strain. 2014;50(5):389–403.

  18. Conde M, Andrade-Campos A, Guimaraes M, Martins JMP. Design of heterogeneous interior notched specimens for material mechanical identification. In: ESAFORM 2021; 2021.

  19. Kim Y, Zhang S, Grolleau V, Roth CC, Mohr D, Yoon JW. Robust characterization of anisotropic shear fracture strains with constant triaxiality using shape optimization of torsional twin bridge specimen. CIRP Ann. 2021;70(1):211–4.

  20. Bertin MBR, Hild F, Roux S. Optimization of a cruciform specimen geometry for the identification of constitutive parameters based upon full-field measurements. Strain. 2016;52(4):307–23.

  21. Réthoré J. A fully integrated noise robust strategy for the identification of constitutive laws from digital images. Int J Numer Meth Eng. 2010;84(6):631–60.

  22. Feld N, Laeuffer H, Roux S. Mesure de champs hétérogènes pour l'identification du comportement de matériaux composites stratifiés [Heterogeneous field measurements for the identification of laminated composite material behaviour]. In: Comptes-Rendus des JNC 19; 2015.

  23. Chamoin L, Jailin C, Diaz M, Quesada L. Coupling between topology optimization and digital image correlation for the design of specimen dedicated to selected material parameters identification. Int J Solids Struct. 2020;193:270–86.

  24. Colantonio G, Chapelier M, Bouclier R, Passieux J-C, Marenić E. Noninvasive multilevel geometric regularization of mesh-based three-dimensional shape measurement. Int J Numer Methods Eng. 2020;121(9):1877–97.

  25. Chapelier M, Bouclier R, Passieux J-C. Free-Form Deformation Digital Image Correlation (FFD-DIC): a non-invasive spline regularization for arbitrary finite element measurements. Comput Methods Appl Mech Eng. 2021;384:113992.

  26. Sederberg TW, Parry SR. Free-form deformation of solid geometric models. In: Proceedings of the 13th Annual Conference on Computer Graphics and Interactive Techniques; 1986. pp. 151–160.

  27. Kavanagh KT, Clough RW. Finite element applications in the characterization of elastic solids. Int J Solids Struct. 1971;7(1):11–23.

  28. Collins JD, Hart GC, Hasselman T, Kennedy B. Statistical identification of structures. AIAA J. 1974;12(2):185–90.

  29. Sutton MA, Wolters WJ, Peters WH, Ranson WF, McNeill SR. Determination of displacements using an improved digital correlation method. Image Vis Comput. 1983;1(3):133–9.

  30. Sutton MA, Orteu J-J, Schreier H. Image correlation for shape, motion and deformation measurements: basic concepts, theory and applications. Berlin: Springer; 2009.

  31. Besnard G, Hild F, Roux S. "Finite-element" displacement fields analysis from digital images: application to Portevin–Le Châtelier bands. Exp Mech. 2006;46(6):789–803.

  32. Leclerc H, Périé J-N, Roux S, Hild F. Integrated digital image correlation for the identification of mechanical properties. In: International conference on computer vision/computer graphics collaboration techniques and applications; 2009. pp. 161–171.

  33. Hild F, Roux S. Comparison of local and global approaches to digital image correlation. Exp Mech. 2012;52(9):1503–19.

  34. Passieux J-C, Périé J-N. High resolution digital image correlation using Proper Generalized Decomposition: PGD-DIC. Int J Numer Methods Eng. 2012;92(6):531–50.

  35. Rouwane A, Bouclier R, Passieux J-C, Périé J-N. Architecture-Driven Digital Image Correlation Technique (ADDICT) for the measurement of sub-cellular kinematic fields in speckle-free cellular materials. Int J Solids Struct. 2022;234–235:111223.

  36. Mahalanobis PC. On the generalised distance in statistics. Proc Natl Inst Sci. 1936;2:49–55.

  37. Passieux J-C, Bouclier R. Classic and inverse compositional Gauss-Newton in global DIC. Int J Numer Methods Eng. 2019;119(6):453–68.

  38. Allaire G, Jouve F, Toader A-M. Structural optimization using sensitivity analysis and a level-set method. J Comput Phys. 2004;194(1):363–93.

  39. De Gournay F, Allaire G, Jouve F. Shape and topology optimization of the robust compliance via the level set method. ESAIM Control Optim Calc Var. 2008;14(1):43–70.

  40. Seo Y-D, Kim H-J, Youn S-K. Isogeometric topology optimization using trimmed spline surfaces. Comput Methods Appl Mech Eng. 2010;199(49–52):3270–96.

  41. Dedè L, Borden MJ, Hughes TJR. Isogeometric analysis for topology optimization with a phase field model. Arch Comput Methods Eng. 2012;19(3):427–65. https://doi.org/10.1007/s11831-012-9075-z.

  42. Gao J, Xiao M, Zhang Y, Gao L. A comprehensive review of isogeometric topology optimization: methods, applications and prospects. Chin J Mech Eng. 2020. https://doi.org/10.1186/s10033-020-00503-w.

  43. Braibant V, Fleury C. Shape optimal design using B-splines. Comput Methods Appl Mech Eng. 1984;44(3):247–67.

  44. Firl M, Wüchner R, Bletzinger K-U. Regularization of shape optimization problems using FE-based parametrization. Struct Multidiscip Optim. 2013;47(4):507–21.

  45. Hojjat M, Stavropoulou E, Bletzinger K-U. The vertex morphing method for node-based shape optimization. Comput Methods Appl Mech Eng. 2014;268:494–513.

  46. Olhoff N, Bendsøe MP, Rasmussen J. On CAD-integrated structural topology and design optimization. Comput Methods Appl Mech Eng. 1991;89(1–3):259–79.

  47. Wall WA, Frenzel MA, Cyron C. Isogeometric structural shape optimization. Comput Methods Appl Mech Eng. 2008;197(33–40):2976–88.

  48. Kiendl J, Schmidt R, Wüchner R, Bletzinger K-U. Isogeometric shape optimization of shells using semi-analytical sensitivity analysis and sensitivity weighting. Comput Methods Appl Mech Eng. 2014;274:148–67.

  49. Fußeder D, Simeon B, Vuong A-V. Fundamental aspects of shape optimization in the context of isogeometric analysis. Comput Methods Appl Mech Eng. 2015;286:313–31.

  50. Wang Y, Wang Z, Xia Z, Poh LH. Structural design optimization using isogeometric analysis: a comprehensive review. Comput Model Eng Sci. 2018;117(3):455–507.

  51. Hirschler T, Bouclier R, Duval A, Elguedj T, Morlier J. Isogeometric sizing and shape optimization of thin structures with a solid-shell approach. Struct Multidiscip Optim. 2019;59(3):767–85.

  52. Hirschler T, Bouclier R, Duval A, Elguedj T, Morlier J. A new lighting on analytical discrete sensitivities in the context of isogeometric shape optimization. Arch Comput Methods Eng. 2021;28(4):2371–408.

  53. Kenway G, Kennedy G, Martins JR. A CAD-free approach to high-fidelity aerostructural optimization. In: 13th AIAA/ISSMO Multidisciplinary Analysis Optimization Conference; 2010. p. 9231.

  54. Lassila T, Rozza G. Parametric free-form shape design with PDE models and reduced basis method. Comput Methods Appl Mech Eng. 2010;199(23–24):1583–92.

  55. Manzoni A, Quarteroni A, Rozza G. Shape optimization for viscous flows by reduced basis methods and free-form deformation. Int J Numer Methods Fluids. 2012;70(5):646–70.

  56. Ballarin F, Manzoni A, Rozza G, Salsa S. Shape optimization by free-form deformation: existence results and numerical solution for Stokes flows. J Sci Comput. 2014;60(3):537–63.

  57. Andreoli M, Ales J, Désidéri J-A. Free-form-deformation parameterization for multilevel 3D shape optimization in aerodynamics. PhD thesis, INRIA; 2003.

  58. Galland F, Gravouil A, Malvesin E, Rochette M. A global model reduction approach for 3D fatigue crack growth with confined plasticity. Comput Methods Appl Mech Eng. 2011;200(5–8):699–716.

  59. Berny M, Jailin C, Bouterf A, Hild F, Roux S. Mode-enhanced space-time DIC: applications to ultra-high-speed imaging. Meas Sci Technol. 2018;29(12):125008.

  60. Fouque R, Bouclier R, Passieux J-C, Périé J-N. Stereo digital image correlation: formulations and perspectives. Comptes Rendus Mécanique. 2021;349(3):453–63.

  61. Pan B, Lu Z, Xie H. Mean intensity gradient: an effective global parameter for quality assessment of the speckle patterns used in digital image correlation. Opt Lasers Eng. 2010;48(4):469–77.

  62. Fouque R, Bouclier R, Passieux J-C, Périé J-N. Fractal pattern for multiscale digital image correlation. Exp Mech. 2021;61:483–97.

  63. Garcia D. Mesure de formes et de champs de déplacements tridimensionnels par stéréo-corrélation d'images [Measurement of three-dimensional shapes and displacement fields by stereo image correlation]. PhD thesis, Institut National Polytechnique de Toulouse - INPT; 2001.

  64. Fletcher R. Practical methods of optimization. New Jersey: Wiley; 2013.


Acknowledgements

Not applicable.

Funding

This study was funded by the French Ministère de l’Enseignement Supérieur de la Recherche et de l’Innovation.

Author information


Contributions

All authors participated in the development of the method. The first author implemented the numerical examples. The second and third authors supervised the study. All authors read and approved the final manuscript.

Corresponding author

Correspondence to Jean-Charles Passieux.

Ethics declarations

Competing interests

The authors declare that they have no competing interests.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.


About this article


Cite this article

Chapelier, M., Bouclier, R. & Passieux, JC. Spline-based specimen shape optimization for robust material model calibration. Adv. Model. and Simul. in Eng. Sci. 9, 4 (2022). https://doi.org/10.1186/s40323-022-00217-9
