Spline-based specimen shape optimization for robust material model calibration

Identification from field measurements allows several parameters to be identified from a single test, provided that the measurements are sensitive enough to the parameters to be identified. To do this, authors use empirically defined geometries (with holes, notches...). The first attempts to optimize the specimen to maximize the sensitivity of the measurement rely on a design space that is either very small (parametric optimization), which does not allow the exploration of very different designs, or, conversely, very large (topology optimization), which sometimes leads to designs that are irregular and cannot be manufactured. In this paper, we propose an intermediate approach based on a non-invasive CAD-inspired optimization strategy which relies on the definition of univariate spline Free-Form Deformation boxes to reduce the design space and thus regularize the problem. Then, from the modeling point of view, we propose a new objective function that takes the experimental setup into account, and we add constraint functions that ensure that the gain is real and the shape physically sound. Several examples show that with this method, and at low cost, one can significantly improve the identification of constitutive parameters without changing the experimental setup.


Introduction
Along with the design of manufactured materials (composites, architectured materials, lattice structures...) or the development of advanced manufacturing processes comes the need to develop simulation tools capable of predicting the behaviour of complex parts. Although purely model-free data-driven simulation tools have been proposed very recently [1], most tools are essentially based on more or less sophisticated constitutive models [2] or their hybridization with artificial intelligence [3]. From an experimental point of view, identifying the possibly large number of parameters of such complex models requires designing several different experiments [4]. Today, calibrating a single material model can require carrying out up to dozens of experiments, which makes the overall process very costly and time consuming.
The current trend is to develop procedures that minimize the number of tests; maximize the benefits of the necessary tests by optimizing them (sample shape, loading path, reduced design of experiments...); get more and richer data from them; and better integrate them with finite element simulations. This approach is referred to as smart testing [5]. For instance, it is possible to calibrate as many parameters as possible at once from a very small number of tests thanks to identification from full-field measurements [6]. Many identification methods have been proposed [7]. Roux et al. [8] reformulated most of them as the minimization of a metric measuring the distance between measured and computed displacement fields. In this respect, identification can be viewed as data assimilation in the mechanics of materials. The methodology proposed in this paper will be illustrated with the weighted Finite Element Model Updating method (FEMU [9,10]) but, following [8], it could also be applied with other identification procedures.
To do this, the testing and instrumentation must meet the following two conditions [6]: first, rich instrumentation techniques must be used, such as field measurements, in order to increase the amount of data (i.e. measured quantities); second, the measured quantities must be sufficiently heterogeneous and, more precisely, sensitive to the sought constitutive parameters. To achieve this, instead of standardized (uni-axial) tests, authors have considered complex specimen shapes (along with possibly complex loading conditions [11]) to better sample the material response. For instance, authors drilled holes [12,13], machined different notch shapes [14] or designed cruciform samples [15] with different shapes and fillet radii, to name a few. But the choice of notch geometry, hole radius and position is often made empirically and does not guarantee any form of optimality with regard to the identification of the parameters.
Going further in the optimization of the shape of the specimen and improving the identification of constitutive parameters require: (a) defining one or more optimality criteria (modeling stage) and (b) building a suitable shape optimization algorithm (solution stage).
Regarding the modeling stage, numerous criteria for the optimality of a specimen have been proposed in the literature, which can be classified into two main families: the first family does not consider any specific constitutive model. It relies on a measure of the heterogeneity of material states. The idea is usually to quantify the ability of a specimen to well sample the behaviour in the principal stresses/strains plane [15,16,17,18] or to target a specific triaxiality [19]. These methods are designed to ensure that the specimen correctly samples the behaviour, which is useful for discriminating between different constitutive models. Although the identification of constitutive parameters can be improved, it is not guaranteed, as the objective function does not measure the accuracy with which parameters are identified.
The second, and more recent, approach consists in optimizing the shape of the specimen with respect to one given constitutive model. The criterion, originally proposed by Bertin et al. [20] and then followed by several authors (e.g. [21,22]), is based on the minimization of the identification uncertainties, making use, more precisely, of the covariance matrix of the identified material parameters. It relies on a fine analysis of the propagation of the measurement noise in the identification process [23,8]. This approach is particularly attractive since it can reduce the uncertainty of the identified parameters by orders of magnitude. However, let us underline that the modeling is usually based on the sole minimization of a cost function (without constraints) and that the latter is only related to the spectrum of the covariance matrix. As such, there may be competition between the specimen shape and other experimental settings. For instance, from an experimental point of view, if the overall size of the specimen changes, the field of view of the camera must be adapted so that it covers the entire specimen. This will modify the image resolution and therefore the uncertainty. On the other hand, in the above cited works, the maximum strain can increase during optimization. There is then a competition between modifying the shape of the specimen and increasing the loading level, whereas the latter is not a design variable.
Concerning the solution stage, it must first be said that most studies do not truly run optimization algorithms but rather use the two above criteria families to compare existing or manually updated designs. Only a few authors relied on automatic optimization algorithms to explore more optimal designs. In this context, most of the proposed optimizations (e.g., [20,19,21]) are carried out in very small design spaces, where only geometric parameters such as radii and hole positions are modified. This is generally known as parametric optimization. The advantage with one or two design variables is that a real optimization algorithm is not mandatory, since a graphical or manual optimization may be sufficient. However, this restricts the search to very limited variations of the geometry. At the opposite extreme, very large design spaces were considered to explore completely generic designs in [22], where the authors developed a SIMP-type topology optimization algorithm. The method has the advantage of being able to evolve the topology and therefore to create holes or notches if necessary. Yet, there are still many limitations. The design space is very large, which results in prohibitive computational costs and may lead to unrealistic irregular shapes. The optimized shape must then be redesigned manually in a postprocessing phase to improve regularity and/or machinability, without considering the effect on optimality or on the sensitivity fields.
In this paper, it is proposed to follow an intermediate path regarding the design space. The starting point is a manually designed or parametrically optimized specimen that we would like to further improve. We propose to use spline-based geometric shape optimization [18] because it allows free-form geometry modifications with unchanged topology. Thanks to spline functions, it is possible to keep a low number of design variables and a design sub-space made of regular shapes. However, for the sake of having a generic method that is simple from an implementation point of view, let us underline that we consider a "classical" Finite Element (FE) mesh for the computation (with sufficient refinement to obtain good accuracy). We therefore develop a non-invasive CAD-based optimization strategy with the help of univariate spline Free-Form Deformation (FFD) boxes that relate the movement of the FE mesh to spline design variables during the optimization. Formally, the spline design space can be interpreted as a reduced-order space of the FE design space that would be directly associated with the FE mesh, which makes it possible to regularize the optimization problem in a flexible way (see, e.g., [24,25] for similar ideas in other contexts). Then, from a modeling point of view, we aim at improving the sensitivity to several constitutive parameters, for the purpose of identifying several parameters at once. This leads us to modify the usual formulation of the optimization problem (objective function and constraints), in particular to take into account the image resolution and the loading magnitude.
This paper is organized as follows: Section 2 quickly reviews the process of parameter identification, namely the FEMU method and its functional, and the covariance matrix that comes from coupling FEMU to Digital Image Correlation (DIC) and that gives a representation of the uncertainty over the identified constitutive parameters. Section 3 focuses on the non-invasive spline optimization strategy, based on FFD [26] as reduced-order modeling for the design variables, that is implemented to make the method suitable to any possible geometry. Then, Section 4 presents the developed modeling for the specimen shape optimization problem. In particular, we explain our choices regarding the cost function and constraints so that the optimization results are physically meaningful and the obtained geometries are machinable. These choices are illustrated and validated through a simple analytic example. Finally, numerical experiments are conducted in Section 5 to assess the performance of the methodology on more complex structures with orthotropic linear elastic constitutive relations, and concluding remarks are drawn in Section 6.

Constitutive parameter identification from DIC
In this work, we consider the FEMU method for material constitutive parameter identification and, without loss of generality, the context of 2D-DIC.

FEMU method
The FEMU method consists in comparing a measured quantity to a simulated one, typically a measured displacement field on a specimen obtained with DIC to a displacement field obtained from numerical computation with similar boundary conditions and the chosen material constitutive law [27,28]. The aim is to find constitutive parameter values that minimize the discrepancy between the simulated field and the measured field. The functional to minimize thus reads:

F_p(q) = ‖v(q) − u‖²_H ,    (1)

where v(q) and u are vectors gathering the simulated and measured displacements, respectively. Obviously, v(q) and u in (1) are compatible, i.e. they gather Degrees Of Freedom (DOF) with the same meaning. Vector q is the set of sought constitutive parameters and H a symmetric positive definite operator such that ‖a‖²_H = aᵀ H a. One can imagine that the quality of the results depends on the quality of the measurement. A consequence is that there exists a "best" norm to quantify the discrepancy between these two fields, which consists in taking H equal to the inverse of the covariance matrix of the measured quantity, [Cov_u]⁻¹ [29,8]. This norm gives each DOF a weight that is inversely proportional to its uncertainty.
To find the q that minimizes F_p(q), we use a Gauss-Newton algorithm. Starting from an initial set q^(0), each Gauss-Newton iteration k updates the values of the constitutive parameters as q^(k+1) = q^(k) + δq, where δq is computed as the solution of the linear system:

[∇_q v^(k) H (∇_q v^(k))ᵀ] δq = ∇_q v^(k) H (u − v^(k)) ,    (2)

where v^(k) = v(q^(k)) and ∇_q denotes the gradient with respect to the sought constitutive parameters, i.e. row i of ∇_q v is ∂v/∂q_i, the sensitivity field to parameter q_i. Note that the normalization of the constitutive parameters is of utmost importance for such an analysis.
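To fix ideas, the Gauss-Newton loop above can be sketched in a few lines. The toy model v(q) below is purely illustrative (in practice v(q) comes from an FE solve), and the identity weighting H is a stand-in for [Cov_u]⁻¹:

```python
import numpy as np

def femu_gauss_newton(v, grad_v, u, H, q0, n_iter=20, tol=1e-10):
    """Gauss-Newton minimization of F_p(q) = ||v(q) - u||_H^2.

    v(q)      -> simulated displacement vector
    grad_v(q) -> matrix whose row i is the sensitivity field dv/dq_i
    u         -> measured displacement vector
    H         -> SPD weighting operator (ideally [Cov_u]^{-1})
    """
    q = np.asarray(q0, dtype=float)
    for _ in range(n_iter):
        r = u - v(q)                      # residual
        S = grad_v(q)                     # (n_q, n_dof) sensitivity matrix
        A = S @ H @ S.T                   # approximate Hessian H_FEMU
        b = S @ H @ r
        dq = np.linalg.solve(A, b)        # linear system of eq. (2)
        q = q + dq
        if np.linalg.norm(dq) < tol * np.linalg.norm(q):
            break
    return q

# Hypothetical 3-dof "displacement" depending on two parameters
def v(q):
    return np.array([q[0] * q[1], q[0] + q[1]**2, 2.0 * q[0]])

def grad_v(q):
    return np.array([[q[1], 1.0, 2.0],           # dv/dq_0
                     [q[0], 2.0 * q[1], 0.0]])   # dv/dq_1

q_true = np.array([2.0, 0.5])
u = v(q_true)                   # noiseless synthetic "measurement"
H = np.eye(3)                   # identity weighting for the toy example
q_id = femu_gauss_newton(v, grad_v, u, H, q0=[1.5, 0.8])
```

With a noiseless measurement the iteration recovers the reference parameters to machine precision, which is a convenient sanity check for any FEMU implementation.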

Noise propagation in FE-DIC
In this section, the analysis focuses on the case of full-field measurements performed by DIC [30,31]. Simulated displacement fields generally come from FE software and are therefore expressed at the nodes of a FE mesh. An easy way to compare a measured displacement field with a simulated one is to seek the measured displacement field in the same FE space as the one used for the simulation. This is usually referred to as FE-DIC in the community [32,33,23,34,13]. Note that there exist many weak [23,35,36] or strong [33,25] regularization techniques which make it possible to perform FE-DIC on arbitrarily fine (let us say analysis-suitable) FE meshes and which remove any limitation on element size (or degree, or type) in the simulation. From now on, v and u will represent the displacement values (DOF) at the nodes of a unique FE mesh (the same for simulation and measurement).
Errors can arise from any step of the DIC procedure (speckle pattern quality, lighting, air heat gradients, camera calibration, camera noise, subpixel interpolation, displacement field numerical approximation...). As commonly done in the field, we consider here that the main contribution to errors comes from camera noise. It is thus possible to derive the covariance of the measured displacements at each FE node very conveniently in FE-DIC. Indeed, following [32,34,20], we assume that the reference image I and the deformed image J are independently affected by white noise of variance σ², or, equivalently, that only J is affected by white noise of variance 2σ². In this context, it can be shown that the covariance of the measured displacements is such that:

[Cov_u] = 2σ² H_DIC⁻¹ .    (3)

Operator H_DIC is the approximation of the Hessian of the DIC functional, which is a cost-free output of the DIC problem and has the following form [37]:

H_DIC = Σ_{x ∈ I} (N(x)ᵀ ∇I(x)) (∇I(x)ᵀ N(x)) ,    (4)

where I is the Region Of Interest (ROI) in the images and N the matrix gathering the FE shape functions.
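As a minimal illustration of eqs. (3)-(4), the sketch below assembles H_DIC for a synthetic 1D "image" with linear (hat) shape functions; image, mesh and noise level are all made up for the example:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic 1D "speckle" sampled at pixel centres, and its graylevel gradient
n_px = 200
x = np.arange(n_px) + 0.5
I = np.cos(0.3 * x) + 0.5 * np.sin(1.7 * x) + 0.1 * rng.standard_normal(n_px)
grad_I = np.gradient(I)

# Linear (hat) FE shape functions on a coarse 1D mesh covering the pixels
nodes = np.linspace(0.0, n_px, 6)
h = nodes[1] - nodes[0]

def shape_matrix(xp):
    """N[p, j] = value of hat function j at pixel p."""
    N = np.zeros((len(xp), len(nodes)))
    for j, xn in enumerate(nodes):
        N[:, j] = np.clip(1.0 - np.abs(xp - xn) / h, 0.0, None)
    return N

N = shape_matrix(x)

# Eq. (4) in 1D: H_DIC = sum over pixels of grad_I^2 * N N^T
NG = N * grad_I[:, None]
H_dic = NG.T @ NG

# Eq. (3): covariance of the measured DOF for an assumed noise std sigma
sigma = 1.0
Cov_u = 2.0 * sigma**2 * np.linalg.inv(H_dic)
```

Nodes whose support contains little image gradient get large entries in Cov_u, which is exactly the per-DOF weighting exploited by the H = [Cov_u]⁻¹ norm.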

Noise propagation in FEMU
From (3), the covariance of the identified constitutive parameters reads [23,20]:

[Cov_q] = 2σ² (∇_q v H_DIC ∇_q vᵀ)⁻¹ ,    (5)

where ∇_q v is the sensitivity field with respect to a set of parameters that should be close to the actual ones (see the beginning of Section 4.1.3 for further details). This covariance matrix [Cov_q] is of size n_q × n_q, with n_q the number of sought constitutive parameters. It contains helpful information for deriving a representative criterion of a "good" constitutive parameter identification [20].
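Once the sensitivity matrix is available, eq. (5) is a direct computation; the numbers below are arbitrary stand-ins for a 2-parameter, 4-DOF case:

```python
import numpy as np

def parameter_covariance(S, H_dic, sigma):
    """Eq. (5): [Cov_q] = 2 sigma^2 (S H_DIC S^T)^{-1},
    with S the (n_q x n_dof) sensitivity matrix (row i = dv/dq_i)."""
    H_femu = S @ H_dic @ S.T
    return 2.0 * sigma**2 * np.linalg.inv(H_femu)

# Illustrative values: 2 sought parameters, 4 measured DOF
S = np.array([[1.0, 0.5, 0.0, 0.2],
              [0.1, 1.0, 0.8, 0.0]])
H_dic = np.diag([4.0, 3.0, 5.0, 2.0])   # stand-in DIC Hessian
Cov_q = parameter_covariance(S, H_dic, sigma=1.0)
```

The resulting n_q × n_q matrix is symmetric positive definite; its eigenvalues are the quantities manipulated by the criteria of Section 4.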

A spline-based regular reduced-order design space
Before detailing the developed modeling of the specimen shape optimization problem (i.e., the cost function along with the constraints), let us introduce here the strategy implemented in terms of design space to solve the problem. From a general point of view, it is necessary to (i) choose a suitable and flexible design space to parametrize the shape modifications and (ii) build an efficient technique to update the FE mesh when the geometry evolves. Recall that the method relies on a unique FE mesh which is used for both the simulation and the measurement. It is this mesh that describes the geometry of the sample, and it is this mesh that is updated.

Design space using spline FFD
The displacements of the mesh nodes could be directly used as design variables. However, the literature in the field of structural shape optimization shows that this corresponds to excessively large design spaces, which may lead to irregular shapes that inherit the C0 regularity of the FE basis functions [38].
To circumvent these issues, we draw inspiration from the structural shape optimization community and use a more regular spline space as a search space [39,40,41]. Spline functions are obviously well suited for shape optimization since they have been built for geometric modeling in CAD: they are of higher regularity and thus require few DOF (mainly associated with the control point positions of the spline entities) to describe a geometry and, more importantly, a geometry update. For specimen shape optimization in our context, let us note that splines were also used in [18], where the obtained spline geometry was given as an input to Abaqus for each computation in the optimization process. Such a technique may require a lot of remeshing steps, especially when estimating sensitivities.

Standard FFD
Our goal is to take advantage of the spline properties but also to use them in a non-invasive way so that usual FE simulation software can be called upon without remeshing. To do so, we propose to rely on FFD techniques that, in some sense, build the spline space as a vector subspace of the FE vector space [26,25]. FFD consists in embedding the initial FE geometry into a (usually) spline morphing box. The deformation of the FE mesh during the shape update can therefore be constrained to follow the deformation of the morphing box, the latter being related to the movement of its control points. More precisely, we apply the deformation of the morphing box to the nodes of the FE mesh; that is, we prescribe each FE node to move as the point at the corresponding location inside the morphing box. Denoting by s_fe and s_cp the vectors collecting the FE design variables (i.e. the movement of the FE nodes) and the spline design variables (i.e. the movement of the box control points), respectively, we can thus write:

s_fe = C_FFD s_cp ,    (6)

where C_FFD is the FFD operator that gathers the evaluation of the spline functions at the FE nodes of the mesh, see [25]. This method is interesting because it decouples the design space (spline-based) from that of the geometry description (FE-based). Not all DOF associated to each control point of the morphing box are necessarily considered as independent design variables. A linear operator C_s can be defined to group some of the design variables in s_cp:

s_cp = C_s s ,    (7)

which leads to consider the vector s that gathers the truly independent design variables.
Recapitulating, the link between the design variables s and s_fe can be written as follows:

s_fe = C_FFD C_s s = C_update s ,    (8)

which exhibits the reduced-order treatment performed in terms of design. At this stage, let us mention that, as opposed to classic reduced-basis methods in simulation, C_update is sparse.
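The assembly of C_FFD and C_s can be sketched for one coordinate of a univariate morphing box. The sketch below evaluates clamped quadratic B-splines with the Cox-de Boor recursion at assumed FE-node parameters; the symmetry grouping encoded in C_s is just one possible choice of operator:

```python
import numpy as np

def bspline_basis(t, knots, p):
    """Cox-de Boor: values of all degree-p B-spline functions at points t
    (clamped knot vector assumed for the right-endpoint fix)."""
    t = np.asarray(t, dtype=float)
    m = len(knots) - 1
    N = np.zeros((len(t), m))                    # degree-0 functions
    for j in range(m):
        N[:, j] = (knots[j] <= t) & (t < knots[j + 1])
    N[t >= knots[-1], m - p - 1] = 1.0           # include right endpoint
    for d in range(1, p + 1):
        Nd = np.zeros((len(t), m - d))
        for j in range(m - d):
            den_l = knots[j + d] - knots[j]
            den_r = knots[j + d + 1] - knots[j + 1]
            if den_l > 0:
                Nd[:, j] += (t - knots[j]) / den_l * N[:, j]
            if den_r > 0:
                Nd[:, j] += (knots[j + d + 1] - t) / den_r * N[:, j + 1]
        N = Nd
    return N                                     # shape (len(t), n_ctrl)

knots = np.array([0., 0., 0., 1., 2., 3., 3., 3.])  # clamped, p=2, 5 ctrl pts
t_fe = np.linspace(0.0, 3.0, 25)                    # FE node parameters
C_ffd = bspline_basis(t_fe, knots, 2)               # eq. (6) operator, (25, 5)

# Example grouping operator C_s: enforce a symmetric design (cp0=cp4, cp1=cp3)
C_s = np.array([[1., 0., 0.],
                [0., 1., 0.],
                [0., 0., 1.],
                [0., 1., 0.],
                [1., 0., 0.]])
C_update = C_ffd @ C_s        # 3 independent design variables move 25 nodes
```

The rows of C_ffd sum to one (partition of unity), so a rigid translation of the control points translates the FE nodes rigidly, a useful consistency check for the operator.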

Univariate spline-based FFD
Following the common practice in shape optimization with FFD, a first path would be to embed the whole specimen FE mesh into a single bivariate FFD morphing box [25]. The main advantage is that it would be possible to update, up to a certain extent, the whole mesh directly with the control points. The problem is that it introduces unnecessary control points in the volume, which can distort the mesh without changing the shape. Alternatively, we propose in this work to create one univariate morphing box for each geometric feature edge (circle, notch, line...) that we want to optimize. Therefore, the morphing box, which is a curve in 2D, has to conform to the initial geometry of the feature edge it controls. Similarly to Eq. (6), we end up with an FFD matrix C^e_FFD linking the FE design variables s^e_fe that move the edge (i.e. not all the FE mesh nodes) to the design variables s^e_cp associated to the control points of the spline FFD curve:

s^e_fe = C^e_FFD s^e_cp .    (9)

From a practical point of view, this variant requires knowing the location of the FE nodes in the parametric space of the spline FFD curve. This may imply inverting the spline curve mapping (the parametric space is no longer the same as the physical space, as it is with simple bivariate boxes). For an easier treatment, we actually define the edge nodes directly in the parametric space of the FFD curve and map them onto the physical space using the spline transformation before generating the FE mesh. Further details will be given in Section 5 (see, in particular, Fig. 8).
With this technique, all control points have an equivalent influence on the edges they control, which eliminates the issues coming from the choice of suitable volume design variables and from ill-conditioning (when a volume control point has only a slight influence on the geometry), as in [25]. Moreover, refinement of the box, if needed, does not require any extra treatment. However, the drawback is that only a subset of the FE nodes is controlled by the modification of the morphing boxes. We thus need to build an efficient technique to update the FE mesh in the bulk, given an evolution of the feature edge nodes through s^e_fe.

Mesh updating strategy
In order to avoid remeshing the domain during the optimization, we propose to use a mesh morphing technique that propagates a variation of the edge geometry, given through s^e_fe, inside the domain. To do so, a simple linear elastic boundary value problem is defined with a unit Young modulus E = 1 and a zero Poisson ratio. The deformation of the edges is considered as a prescribed displacement on the corresponding nodes, and the goal is to obtain the resulting displacement s_fe of all FE nodes, which is the sought mesh correction field [42,43]. In practice, a static condensation is performed, which leads to the morphing matrix C_m such that:

C_m = −(K^m_rr)⁻¹ K^m_re ,    (10)

where K^m_rr and K^m_re are the sub-matrices of the stiffness matrix K^m of the morphing problem, and indices e and r denote the DOF of the edge nodes and of the remaining nodes, respectively. The inverse of operator K^m_rr is obviously not explicitly computed: only an LU factorization is performed, and an efficient multi right-hand-side solver is used to compute C_m.
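The static condensation of eq. (10) can be sketched with a sparse LU factorization; the "morphing problem" below is a toy 1D elastic chain, so its exact solution (linear interpolation between the prescribed edge values) is known:

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def morphing_operator(K, edge_dofs):
    """Eq. (10): C_m = -(K_rr)^{-1} K_re, without forming the inverse.
    One LU factorization of K_rr, then a multi right-hand-side solve
    on the columns of K_re."""
    n = K.shape[0]
    remaining = np.setdiff1d(np.arange(n), edge_dofs)
    K = K.tocsr()
    K_rr = K[remaining, :][:, remaining]
    K_re = K[remaining, :][:, edge_dofs]
    lu = spla.splu(K_rr.tocsc())            # sparse LU factorization
    C_m = -lu.solve(K_re.toarray())         # multi-RHS solve
    return C_m, remaining

# Toy morphing problem: 1D chain with unit stiffness (tridiagonal K)
n = 8
K = sp.diags([-np.ones(n - 1), 2.0 * np.ones(n), -np.ones(n - 1)], [-1, 0, 1])
edge = np.array([0, n - 1])                 # "feature edge" dofs
C_m, rem = morphing_operator(K, edge)

s_edge = np.array([1.0, 0.0])               # prescribed edge movement
s_rem = C_m @ s_edge                        # propagated interior movement
```

For this chain the interior nodes interpolate linearly between the two prescribed values, which confirms that the condensed operator reproduces the elastic morphing solution.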
From all these steps, we can define a unique matrix C_update that reduces the initial design space related to the movement of all FE nodes to a low-dimensional design subspace associated with the sole independent edge design variables collected in s^e:

s_fe = C_m C^e_FFD C^e_s s^e = C_update s^e .    (11)

Obviously, operator C^e_s is the counterpart of C_s in (7) for the edge design variables. Once again, this composition of operators provides an explicit link between the (few) edge design variables and the movement of all mesh nodes. Figure 1 illustrates the transformations involved by the different operators. Starting from any FE mesh, it defines a reduced design subspace, similarly to the reduced-basis methods in computational mechanics which perform a projection onto a reduced approximation subspace. This operator is assembled once, before starting the optimization solver.
Excessive element distortion results in a small transformation Jacobian determinant, which becomes negative for flipped elements. Therefore, a constraint function is defined to keep this quantity above a certain positive threshold ε_Jac at each Gauss point throughout the optimization process. Besides, this quantity is already computed to obtain the stiffness matrix needed for the cost function, so it adds only a marginal computational cost. For computational purposes, the constraint function is normalized by the initial values of the transformation Jacobian determinant. The constraint reads:

min_{k ∈ [1, n_pg]} (det(J))_k / (det(J_0))_k − ε_Jac ≥ 0 ,    (12)

where n_pg is the number of Gauss points in the FE model, (det(J))_k is the transformation Jacobian determinant at Gauss point k, (det(J_0))_k is the initial transformation Jacobian determinant at Gauss point k, and ε_Jac is the chosen threshold.
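A minimal sketch of this constraint for linear triangles (one Gauss point per element, so one Jacobian determinant per triangle); the two-element mesh and the threshold value are only illustrative:

```python
import numpy as np

def tri_detJ(nodes, tris):
    """Transformation Jacobian determinant of linear triangles
    (twice the signed area); it turns negative for flipped elements."""
    a, b, c = nodes[tris[:, 0]], nodes[tris[:, 1]], nodes[tris[:, 2]]
    u, v = b - a, c - a
    return u[:, 0] * v[:, 1] - u[:, 1] * v[:, 0]

def jacobian_constraint(nodes, tris, detJ0, eps_jac=0.2):
    """Eq. (12)-style constraint: >= 0 as long as every normalized
    Jacobian determinant stays above the threshold eps_jac."""
    return np.min(tri_detJ(nodes, tris) / detJ0) - eps_jac

# Two-triangle mesh of the unit square, then a morphed configuration
nodes0 = np.array([[0., 0.], [1., 0.], [1., 1.], [0., 1.]])
tris = np.array([[0, 1, 2], [0, 2, 3]])
detJ0 = tri_detJ(nodes0, tris)

nodes = nodes0.copy()
nodes[2] = [0.6, 0.6]                 # pull the top-right corner inwards
g = jacobian_constraint(nodes, tris, detJ0)   # admissible while g >= 0
```

Pulling the corner further in would drive g below zero before any element actually flips, which is precisely the safety margin ε_Jac provides.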
Remark. If at convergence of the optimization algorithm, this constraint is active, it means that the morphing may have distorted the mesh too much. This is a good indicator that allows us to remesh the domain with the new shape and restart a new optimization phase [43].

Appropriate modeling of the specimen shape optimization problem
Now that the design space has been defined, we present our modeling approach to formulate the original shape optimization problem. We first review the cost function used today in the field (see, e.g., [21,20,22]), based on the knowledge of noise propagation in the identification workflow recalled in Section 2, for the purpose of optimizing the shape of a specimen to best identify the constitutive parameters of a specific model. Then, we improve it and complement the optimization problem with constraints in order to get a physically sound geometry. An example with a simple tension beam is presented at the end of this section to validate our modeling choices.

Construction and evaluation of the cost function
As recalled in Section 2, the covariance matrix provides an estimation of the quality of the identified parameters [21,20,22]. Improving the quality of the identification can thus be achieved by "minimizing" this covariance matrix that expresses the uncertainty on the constitutive parameter values, i.e. by "minimizing" [Cov_q](s^e) = H_FEMU⁻¹(s^e) (see Eq. (5)) with respect to the design variables s^e that modify the geometry of the specimen. In this section, we specify what "minimizing" can mean for a matrix.

Physical meaning of the eigenvalues of the covariance matrix
The idea proposed in [21,20,22] is to work on the eigenvalues of the covariance matrix. These eigenvalues have a physical meaning when looking at the multivariate normal distribution associated with the covariance matrix [Cov_q]. In fact, the isosurface where qᵀ [Cov_q]⁻¹ q equals 1 is an ellipsoid whose principal semi-axis directions are given by the eigenvectors of [Cov_q], and whose semi-axis lengths are the square roots of the associated eigenvalues. An illustration with 2 variables is shown in Fig. 3. To minimize the uncertainty on the identified constitutive parameters, this ellipse should be as small as possible, meaning that the [Cov_q] eigenvalues should be as small as possible. These eigenvalues also correspond to the inverses of the FEMU functional curvatures near the optimum set of constitutive parameters, since [Cov_q] = H_FEMU⁻¹ and H_FEMU is an approximation of the Hessian matrix of F_p (see Eq. (1)). From this point of view, decreasing the [Cov_q] eigenvalues improves the convexity of the FEMU functional, which also translates into a better confidence in the identified constitutive parameters.

Defining a criterion over eigenvalues
In order to perform an optimization of the constitutive parameter identification procedure, we need to derive a unique scalar criterion from [Cov q ] eigenvalues. Several choices exist.
Feld et al. [21] minimize the ratio of the largest eigenvalue to the smallest. This choice improves the conditioning of [Cov_q] and also of H_FEMU. Hence, the numerical errors are reduced during the Gauss-Newton FEMU minimization. However, it does not necessarily enhance the sensitivity to the sought parameters. Indeed, an increase of both eigenvalues can lead to a decrease of this ratio, which means that the optimized experiment may be less sensitive to the sought constitutive parameters. Graphically, with the example of Figure 3, this criterion leads to an ellipse looking more like a circle, but it does not affect the size of that circle.

Bertin et al. [20] and Chamoin et al. [22] suggest that the determinant of [Cov_q] can be used as a criterion; they call it the uncertainty volume. In this case, all eigenvalues contribute equally to the criterion. But once again, it does not necessarily enhance the sensitivity to all constitutive parameters. As a matter of fact, the optimization procedure can decrease some eigenvalues while increasing others at the same time. In this case, parameters that were not well identified with the initial experiment can be even less well identified in the optimized experiment. Graphically, with the example of Figure 3, this choice results in an ellipse with a smaller surface. Yet it does not prevent the ellipse from getting thinner and longer.
Bertin et al. [20] and Chamoin et al. [22] also minimize the largest eigenvalue of [Cov_q]. In this case, only the worst parameter sensitivity matters. This choice ensures that each constitutive parameter will be identified with an uncertainty that is not greater than the initial maximum uncertainty. Graphically, this choice makes the circumscribed circle of the ellipse smaller. The shape of the ellipse can vary, but its largest semi-axis is necessarily smaller at the end of the optimization than the largest initial semi-axis. In the same spirit, our optimization problem will consist in minimizing the largest eigenvalue of [Cov_q] or, equivalently, in maximizing the smallest eigenvalue of H_FEMU:

s^e_opt = arg min_{s^e} ( max eig [Cov_q](s^e) ) = arg min_{s^e} ( min eig H_FEMU(s^e) )⁻¹ .    (15)

The objective function to minimize scales with the image noise variance σ². One way to improve parameter identification from full-field measurements thus consists in developing alternative DIC algorithms that are less sensitive to image noise [44,45]. The noise being independent of the sample shape, it will not be taken into account in the sequel.
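The differences between the three criteria can be seen on a toy 2-parameter covariance matrix (diagonal for readability; the numbers are arbitrary):

```python
import numpy as np

def criteria(cov_q):
    """Scalar criteria built on the eigenvalues of [Cov_q]."""
    w = np.linalg.eigvalsh(cov_q)          # ascending eigenvalues
    return {"ratio": w[-1] / w[0],         # conditioning (Feld et al. [21])
            "volume": np.prod(w),          # uncertainty volume ([20,22])
            "worst": w[-1]}                # largest eigenvalue (eq. (15))

cov_init = np.diag([1.0, 4.0])             # initial specimen
cov_new = np.diag([3.0, 3.5])              # candidate "optimized" specimen

c0, c1 = criteria(cov_init), criteria(cov_new)
# Here the ratio criterion improves (ellipse closer to a circle) even
# though the uncertainty volume grows and the best-identified parameter
# got much worse; only the max-eigenvalue criterion certifies that no
# parameter ends up worse than the initial worst case.
```

This tiny example reproduces the pitfalls discussed above: improving conditioning or volume alone does not guarantee that every parameter is better identified.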

Computation of the sensitivity fields w.r.t. constitutive parameters
First, in order to compute the sensitivity fields ∇_q v, it is necessary to choose values for the different constitutive parameters of the selected constitutive law, since v(q) must be evaluated. These parameters are defined numerically before the optimization process and are not modified during the optimization. Hence, it is preferable to choose values that are not too different from those expected.
Since we aim at identifying linear elastic constitutive parameters, it is possible to compute the derivative of the simulated field with respect to a given parameter q_i analytically. Indeed, v comes from the resolution of an FE static problem Kv = f, where the stiffness matrix K depends on q and the applied load f does not. Differentiating this relation yields:

K,_{q_i} v + K v,_{q_i} = 0 ,    (16)

which leads to:

v,_{q_i} = −K⁻¹ K,_{q_i} v ,    (17)

where ,_{q_i} denotes ∂·/∂q_i. Since the FE basis functions do not depend on q, only the derivative of the Hooke matrix is needed to compute K,_{q_i}. This derivative can be computed exactly when the constitutive law is linear. K,_{q_i} can then be assembled with the same routines as the stiffness matrix K, using the Hooke matrix derivative instead of the Hooke matrix.
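For a stiffness that is linear in the parameters, eq. (17) can be checked against finite differences in a few lines; the 2×2 matrices below are arbitrary SPD stand-ins for assembled stiffness contributions:

```python
import numpy as np

# Stiffness depending linearly on two parameters: K(q) = q0*K0 + q1*K1
K0 = np.array([[2., -1.], [-1., 2.]])
K1 = np.array([[1., 0.], [0., 3.]])
f = np.array([1.0, 0.5])

def solve(q):
    """FE-like static solve: v = K(q)^{-1} f."""
    return np.linalg.solve(q[0] * K0 + q[1] * K1, f)

def sensitivity(q, i):
    """Eq. (17): dv/dq_i = -K^{-1} K_{,q_i} v (K_{,q_i} exact here
    since K is linear in q)."""
    K = q[0] * K0 + q[1] * K1
    dK = (K0, K1)[i]
    v = np.linalg.solve(K, f)
    return -np.linalg.solve(K, dK @ v)

q = np.array([1.0, 2.0])
dv0 = sensitivity(q, 0)

# Central finite-difference check of the analytic sensitivity
h = 1e-6
dv0_fd = (solve(q + [h, 0.0]) - solve(q - [h, 0.0])) / (2 * h)
```

With the normalization of the next paragraph, the sensitivity to the normalized parameter is simply q_i^0 times the field computed here.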
Each parameter q_i is usually normalized with its chosen initial value q_i⁰ in the FEMU minimization process (to balance the sensitivities of the different parameters). Let us denote by q̃_i the normalized parameter, such that q_i = q_i⁰ q̃_i. As a result, we use v,q̃i instead of v,qi, and we have the following relation:

v,q̃i = q_i⁰ v,qi.
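The analytic sensitivity computation can be sketched on a toy problem where the stiffness matrix is assumed to decompose linearly in the parameters (K = q₁K₁ + q₂K₂, a hypothetical stand-in for the assembled FE operators):

```python
import numpy as np

# Hypothetical toy: stiffness assembled as K(q) = q1*K1 + q2*K2, so K_{,q1} = K1
rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5)); K1 = A @ A.T + 5.0*np.eye(5)
B = rng.standard_normal((5, 5)); K2 = B @ B.T + 5.0*np.eye(5)
q0 = np.array([2.0, 3.0])            # chosen initial parameter values
f = rng.standard_normal(5)           # load, independent of q

K = q0[0]*K1 + q0[1]*K2
v = np.linalg.solve(K, f)            # K v = f

# Analytic sensitivity: K v_{,q1} = -K_{,q1} v
v_q1 = np.linalg.solve(K, -K1 @ v)

# Normalized parameter q1 = q1^0 * q1_tilde  =>  v_{,q1_tilde} = q1^0 * v_{,q1}
v_q1_tilde = q0[0] * v_q1
```

In the actual FEMU setting, K,qi would be assembled from the Hooke matrix derivative as described above; the structure of the computation is the same.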

Approximation of the DIC Hessian to take into account the camera's field of view
The last thing needed to compute the cost function in (15) is the approximation of the Hessian H_DIC of the DIC functional. This operator is given as an output of the DIC problem; its form is detailed in (4). The optimization of the specimen geometry is intended to be carried out before any experiment. Since the speckle pattern is generally not known a priori, the H_DIC matrix cannot be computed as such, because I, and thus ∇I, do not yet exist. Feld et al. [21] proposed to replace H_DIC with an identity matrix. In contrast, Bertin et al. [20] and Chamoin et al. [22] recommended the use of a mean-field assumption. This assumption consists in considering that the graylevel gradient ∇I varies much more than the shape functions N in (4). As a consequence, we can make the following approximation:

H_DIC ≈ G_I² M,

where M is a pseudo mass matrix and G_I is a measure of the mean graylevel gradient, similar (yet not exactly equal) to the Mean Intensity Gradient (MIG) as defined by Pan et al. [46]. Similarly to the image noise variance term, we chose to remove this factor from the objective function in this study, since its value depends on a speckle pattern unknown at this stage and it is constant with respect to the design variables s_e. Note that it is possible to maximize this value by working on the pattern itself, regardless of the shape [46,47]. Now, since we aim at optimizing the shape of the specimen, its overall size may change. Consequently, the experimental set-up may have to be adapted, especially the camera's field of view.
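Once a pattern candidate exists, the factor G_I can be estimated from a trial image. A minimal sketch, using a synthetic smoothed random image as a hypothetical pattern and the mean modulus of the graylevel gradient (in the spirit of the MIG of Pan et al. [46]):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

# Synthetic speckle-like pattern (a stand-in: the real pattern is not known a priori)
rng = np.random.default_rng(1)
img = gaussian_filter(rng.random((256, 256)), sigma=2.0)

# Mean graylevel gradient: average modulus of the image gradient over the ROI
gy, gx = np.gradient(img)
G_I = np.mean(np.sqrt(gx**2 + gy**2))
# under the mean-field assumption, H_DIC would then scale with G_I**2
```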
Indeed, if the specimen is larger, the camera has to be placed further away (increasing the distance Z between the object and the camera), or the focal length f has to be reduced, in order to capture the whole ROI. By doing so, a given physical displacement will lead to a smaller displacement in pixels, given the finite definition of the image. To take this effect into account, we write the H_DIC matrix by considering physical fields, in contrast to (19) and [20,22] where fields are expressed in the images. To this end, we make the change of variables I = p(Ω) in (4), where Ω is the physical ROI on the specimen and p is the camera projector model [48]. To improve readability, we redefine N to be the FE shape functions in the physical ROI Ω, as if we had written N ∘ p⁻¹ instead of N in the previous equations where quantities are defined in the image. H_DIC then involves the projector gradient ∇p. Considering a pin-hole camera model (see, e.g., [48]), which is a low-order camera model, ∇p can be made explicit. Indeed, the pinhole model states:

p(x, y) = ( f_x x/Z + x_0 , f_y y/Z + y_0 ),

where f_x and f_y, in pixels, are the camera focal sampling parameters along the two image directions, Z is the distance between the specimen and the camera, and (x_0, y_0) is the center of the image (in pixels). The projector gradient can then be obtained as ∇p = diag(f_x, f_y)/Z, so that H_DIC reads:

H_DIC ≈ G_I² (f_x f_y / Z²) M,

where M is a true mass matrix computed in the FE coordinate system. We end up with an additional weighting coefficient that depends on the experimental setting: f_x (respectively f_y) depends on the focal length f through a parameter k_x (resp. k_y) that is intrinsic to the camera (it corresponds to a number of pixels per meter on the sensor), such that f_x = k_x f (resp. f_y = k_y f) (see, again, [48]).
Assuming k_x ≈ k_y = k, the DIC Hessian reads:

H_DIC ≈ G_I² k² (f²/Z²) M.

This expression has the advantage of explicitly taking into account the camera settings that affect the field of view. It is theoretically possible to increase the sensor definition k to improve identification by considering higher-definition sensors. But as this parameter depends on the choice of a camera and is in any case limited by technological capabilities, it is considered fixed. So far, we are only looking to improve a test piece, without considering a particular experimental setup. In the following, we formulate an equivalent objective function that does not involve any camera parameters. Indeed, by application of Thales' theorem, and assuming that pictures are always taken so as to maximize the image resolution on the ROI, the coefficient f²/Z² can be linked to the ratio of the sensor area S_0 to the ROI area S:

f²/Z² = S_0/S.

S depends on the specimen geometry; it therefore enters the cost function, in the form of the area of the rectangle tangent to the ROI. Conversely, S_0 being the sensor size, it is considered fixed. Eventually, the DIC Hessian is approximated by:

H_DIC ≈ G_I² k² (S_0/S) M.

Improvement with constraints to take into account the boundary conditions magnitude

We consider tension tests for the identification of elastic constitutive parameters. In this work, only the shape is optimized. Boundary conditions are fixed during the optimization process.
As mentioned in the introduction, studies in the literature do not limit the maximum admissible strain or stress. As a result, there may be a competition between the loading magnitude and the shape parametrization. Indeed, if the maximum strain can increase during shape optimization, then the question is: which shape is better? (1) the optimized one with a fixed loading, or (2) the initial one with an increased loading? This competition is due to the fact that the loading magnitude is not one of the design variables. It therefore seems essential to consider the loading level in the optimization problem. In the following, given the linear elasticity context, we choose to optimize the shape without changing either the force (or imposed displacement) applied to the specimen or the maximum strain.
Limiting the maximum strain does not prohibit having a specimen that presents a strain concentration. It only allows separating the improvement due to a better strain distribution within the specimen from that due to an increase of the strain level caused by a larger loading magnitude. Finally, a model often needs to be calibrated in a certain strain range, and this optimization constraint also allows the experiment to sample the behavior in that range.

Resulting constrained optimization problem
Recapitulating, let us now write the full constrained optimization problem that we solve. As stated above, although they could be optimized independently, the external loading magnitude, the image noise variance σ², the image gradient G_I, the sensor definition k and the sensor size S_0 are considered fixed, since they do not depend on the shape of the object. Consequently, we end up with the following constrained optimization problem to solve:

s_e* = arg min_{s_e} λ_max([Cov_q](s_e))   such that   (ε_eq(s_e))_k − ε_max ≤ 0, ∀k ∈ [1..n_pg].

Analysis of a toy problem
To analyze the cost function, let us consider the simple tension beam presented in Figure 4. An isotropic linear elastic material is chosen, thus involving two parameters: the Young modulus E and the Poisson coefficient ν. Symmetry Dirichlet boundary conditions are applied at the bottom and right edges, the upper edge remains free, and the left edge is subjected to a uniformly distributed load. We denote by F the resultant force, which is kept constant. In this case, an analytical solution can be found when only the height h and the length L of the rectangle are the design variables. The resulting vertical displacement field is the following:

u_y(x, y) = −(νF)/(hE) y.

Cost function w.r.t. Poisson ratio only
Optimizing the specimen shape through (28) with respect to the Poisson ratio only reads:

(L, h)* = arg min_{L,h} 3E²/(F²Lh)   or   (L, h)* = arg min_{L,h} 3E²/F²,

without or with the weighting of the cost function by the specimen area S = hL, respectively. Without weighting, the cost function pushes the sample height h to infinity. This is indeed relevant, because the displacements related to the Poisson effect are maximum far from the horizontal symmetry axis. In reality, increasing the height h of the specimen requires increasing the field of view by moving back (increasing Z) or adjusting the lens (reducing f), which, given the finiteness of the sensor, does not change the optimality of the specimen. When we take into account the field of view of the camera by adding the weight S to the functional, we see that the cost no longer depends on h (or L), which is consistent with the previous analysis.
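This analysis can be checked symbolically, under the assumptions of this section (beam occupying [0, L] × [0, h] with symmetry at y = 0, sensitivity v,ν = −Fy/(hE)):

```python
import sympy as sp

x, y, E, F, h, L = sp.symbols('x y E F h L', positive=True)

# Sensitivity of u_y to nu on the beam [0, L] x [0, h] (assumption: symmetry at y = 0)
v_nu = -F*y/(h*E)
H = sp.integrate(v_nu**2, (y, 0, h), (x, 0, L))   # scalar FEMU "Hessian"

cost_unweighted = sp.simplify(1/H)        # 3*E**2/(F**2*L*h): decreases as h grows
cost_weighted = sp.simplify(h*L/H)        # 3*E**2/F**2: independent of h and L
```

The area-weighted cost reduces to a constant in the design variables, as stated above.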

Cost function w.r.t. Young Modulus only
When we consider the cost function (28) with respect to the Young modulus E only, setting ν = 0, it is defined as follows:

(L, h)* = arg min_{L,h} 3E⁴h²/(F²L²),

which clearly illustrates the competition between the height h and the resultant F of the load: the cost depends on these quantities only through the ratio h/F. In this example, optimizing the shape of the specimen (by reducing the height h) is strictly equivalent to increasing the external force F by the same factor. In this case, the intrinsic quality of the geometry is not improved. Worse, we risk producing more complex geometries, therefore more expensive to machine, whereas an increase in the load would be sufficient. Finally, one could imagine even worse situations where the refinement of the specimen improves the identification less than a simple increase of the loading. It is therefore important to consider the magnitude of the loading in the cost function. This was done here, in an elastic context, by limiting the maximum strain to its initial value, which allows decoupling the improvement of the specimen due to an apparent increase of the loading (increase of the strains) from the one due to an improvement of the strain distribution in the specimen at constant maximum strain.
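The same symbolic check, under the same assumptions (u_x = Fx/(hE) with the origin at the fixed edge), exhibits the h–F competition:

```python
import sympy as sp

x, y, E, F, h, L = sp.symbols('x y E F h L', positive=True)

# Sensitivity of u_x = F*x/(h*E) to E (with nu = 0): d(u_x)/dE = -F*x/(h*E**2)
v_E = -F*x/(h*E**2)
H = sp.integrate(v_E**2, (y, 0, h), (x, 0, L))

cost = sp.simplify(h*L/H)    # area-weighted cost: 3*E**4*h**2/(F**2*L**2)
# halving h changes the cost exactly as doubling F does: only h/F matters
```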

Validation on the tension beam problem
To start with, we consider again the tension beam of section 4.5, but this time, instead of deriving an analytic optimum, we perform the numerical resolution of problem (28) using the SLSQP method of the scipy.optimize.minimize function from the Python library SciPy. The shape of the specimen is optimized to minimize the variance of the Young modulus only. It is parametrized by the evolution of the top line, which plays the role of the feature. The design space is built from 6 control points of a quadratic morphing box.
Only two design variables were actually chosen in this example. These design variables were chosen so as to keep the left side of the top edge horizontal, as shown in Figure 5. To do so, a C⁰ line was added at x = 100 mm by repeating a knot. C¹-continuity was then ensured by imposing that some control points move like their neighbors, hence enforcing tangential directions [41]. Exceptionally in this example, to verify that the algorithm is robust, the constraint on the maximum admissible strain was set to 6 times the strain of the initial specimen; otherwise the initial shape would already be optimal and the convergence of the algorithm could not be analysed. Figure 6 shows the results of the optimization algorithm. As one would expect, the optimal shape is still a rectangle, but the height is divided by 6. This simple example validates the expression of the cost function as well as the numerical FFD-based procedure for updating the geometry. The optimal value of the cost function is reduced by one and a half orders of magnitude (divided by 36), which is consistent with the analytic expression of (33).
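This validation can be reproduced with a minimal SLSQP script (with a hypothetical initial height h₀; the area-weighted cost is normalized to 1 at h₀, and the relaxed strain constraint ε = F/(hE) ≤ 6ε₀ reduces to h ≥ h₀/6):

```python
from scipy.optimize import minimize

h0 = 60e-3                          # hypothetical initial specimen height (m)

cost = lambda s: (s[0]/h0)**2       # area-weighted cost for E scales as h**2
# strain constraint eps(h) <= 6*eps(h0) is equivalent to h >= h0/6
cons = [{'type': 'ineq', 'fun': lambda s: s[0] - h0/6.0}]

res = minimize(cost, x0=[h0], method='SLSQP',
               bounds=[(h0/100.0, 2.0*h0)], constraints=cons)
# the constraint becomes active: the optimal height is h0/6, the cost is divided by 36
```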

Optimization w.r.t. to orthotropic linear elastic parameters
The goal is to improve this geometry with respect to its sensitivity to the orthotropic linear elastic constitutive parameters, namely the longitudinal Young modulus E_1, the transverse Young modulus E_2, the shear modulus G_12 and the Poisson ratio ν_12. Similarly to [21], we impose Dirichlet boundary conditions and do not use information other than the displacement field. As a consequence, the Young moduli and the shear modulus cannot be identified as such. Therefore, the sample is optimized with the ratios E_2/E_1 and G_12/E_1, together with ν_12, as constitutive parameters of interest in this example. The strain constraint is expressed as in (27) with ε_max = max_k (ε_eq(s_e⁰))_k.
First optimization phase. The optimization leads to the geometry shown in Figure 9. The optimal shape is superimposed on the initial shape and the deformed morphing box (along with the optimal position of the control points) is also represented in red. It is interesting to see that the size of the holes has increased without increasing the maximum equivalent stress. In fact, the area of stress concentration has expanded. Figure 10 shows the evolution of the cost and constraint functions.
We can see that it is possible to improve the cost function value by a factor 10 without reaching higher equivalent stress values. This means that the uncertainty on the worst-identified parameter is lowered by an order of magnitude and that the same experimental set-up as with the initial specimen can be used, which is of great interest to make the most out of the experiment with regard to constitutive parameter identification.

Figure 9: Improved geometry (in black) compared to the initial geometry (in grey), and 1D-FFD morphing box and its control points position at the end of the optimization (in red).
Furthermore, it can be noticed that at the end of the optimization process, the Jacobian constraint is active for some Gauss points, see Figure 10. However, this is not due to purely geometrical issues such as loops on the edges; it comes from the mesh modification that propagates the edge deformation to the rest of the FE nodes (mesh morphing step). Indeed, when the edge deformation is too large, using an elastic morphing can lead to flipped elements, even if the specimen edges remain physically sound (see Figure 11).

Second optimization phase. As mentioned earlier, a solution is then to remesh the interior of the specimen at the end of the optimization process, and to launch another shape optimization on this new mesh. By keeping the same nodes as in the old mesh on the feature edges, in particular for the optimized holes, we can keep the same FFD matrix C_FFD as in the first optimization process. Hence, only C_m is to be re-assembled. The results obtained after the second optimization step are shown in Figures 12 and 13. With the new mesh, it is possible to reach higher free-form deformations of the holes and obtain larger holes. What stops the optimization are the bounds that were arbitrarily chosen for the design variables. During this step, the cost function value is reduced by a factor 3, which brings the overall reduction factor up to 30.
Figure 12: Improved geometry (in black) compared to the initial geometry (in grey), and 1D-FFD morphing box and its control points position at the end of the optimization (in red).
Figure 14: Sensitivity fields magnitude for each constitutive parameter (E_2/E_1, G_12/E_1 and ν_12), initially and at each optimization step.

A very ill-posed problem
To illustrate the ill-posedness of the problem, the algorithm is started from two different initial shapes. We consider again the orthotropic linear elastic open hole tensile test of the previous section and consider two slightly different initial positions of the holes. Figure 15 presents the evolution of the objective function during the convergence of the algorithm for both initializations, along with the evolution of the corresponding shapes. It can be noticed that the two different initial guesses lead to significantly different optimized shapes. However, the objective functions of the optimized shapes are of the same order of magnitude, which means that they are two different options which both significantly improve identification. This illustrates the presence of a possibly large number of local minima. This property strengthens our choice to consider reduced and regular design spaces in such problems instead of too large and too generic design spaces like FE-based or topology optimization. Resorting to global optimization could be an option. From an engineering point of view, we think that considering the very same algorithm with multistart could be interesting, as it may provide a set of equivalently optimal shapes from which engineers could select based on other criteria (for instance machinability).
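Such a multistart wrapper around the same local solver can be sketched as follows, on a one-dimensional non-convex surrogate objective (a hypothetical stand-in for the shape-design cost):

```python
import numpy as np
from scipy.optimize import minimize

# One-dimensional non-convex surrogate objective (hypothetical stand-in for the design cost)
def j(s):
    return (s[0]**2 - 1.0)**2 + 0.1*np.sin(5.0*s[0])

# Multistart: the same local solver launched from several initial guesses
starts = np.linspace(-2.0, 2.0, 7)
runs = [minimize(j, x0=[s0], method='SLSQP') for s0 in starts]
minima = sorted({round(r.x[0], 2) for r in runs if r.success})
```

Distinct local minima of comparable cost are collected, from which a designer could pick according to secondary criteria such as machinability.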

Conclusion
In this paper, a methodology has been proposed to improve the sensitivity of a test to chosen constitutive parameters through shape optimization of the specimen. Our approach relies on non-invasive spline tools that enable an FE formalism to be retained while restricting the search to regular, manufacturable designs. It is also meant to fill the gap between existing approaches in terms of design space, halfway between topology and parametric optimization. Indeed, the FFD-based reduced basis approach developed herein allows working with few design variables while keeping a search space rich enough to lead to a vast range of geometries, provided that their topology is the same. We placed ourselves in the case where a pre-design of the specimen is available and the goal is to improve its geometry, with respect to the sensitivity to constitutive parameters for given boundary conditions. Overall, the developed solution strategy consists in a non-invasive, modern CAD-based shape optimization that could obviously be applied to other optimization problems (such as structural shape optimization).
To define the optimization problem, we started from the cost function formulation of Bertin et al. [20], which makes sense from both experimental and mathematical points of view. We improved this objective function by including information coming from the camera model, in order to handle possible size changes throughout the optimization process. Furthermore, the material sensitivity fields, which are the derivatives of the displacement field with respect to the constitutive parameters, were computed analytically, i.e. without resorting to finite difference methods. Constraint functions were also added to ensure that the obtained geometry is physically sound. The first one involves the Jacobian of the FE mapping from the reference element to the physical elements and ensures that no element is flipped. The second one sets a maximum equivalent strain, kept equal to the initial maximum equivalent strain, in order to guarantee that the decrease of the cost function cannot be attributed to the boundary condition values. Let us note that these modeling ingredients could be applied to other optimization solution strategies (independently of the definition of the design variables), which should improve the obtained specimen shape in any case.