An unstructured immersed finite element method for nonlinear solid mechanics

We present an immersed finite element technique for boundary-value and interface problems from nonlinear solid mechanics. Its key features are the implicit representation of domain boundaries and interfaces, the use of Nitsche's method for the incorporation of boundary conditions, accurate numerical integration based on marching tetrahedrons, and cut-element stabilisation by means of extrapolation. For the discretisation, structured and unstructured background meshes with Lagrange basis functions are considered. We show numerically and analytically that the introduced cut-element stabilisation technique provides an effective bound on the size of the Nitsche parameters and, in turn, leads to well-conditioned system matrices. In addition, we introduce a novel approach for representing and analysing geometries with sharp features (edges and corners) using an implicit geometry representation. This allows the computation of typical engineering parts composed of solid primitives without the need for boundary-fitted meshes.

A major difficulty of non-body-fitted methods is the accurate integration of the arising volume and surface integrals. Here, we make use of a tessellation concept which allows us to incorporate standard Gauß quadrature schemes. In the course of this development, a technique is presented which enables the representation of sharp domain features by performing constructive solid geometry (CSG) modelling directly on the embedding mesh. This approach offers a clear advantage over conventional methods of geometry resolution, because sharp features are accurately reproduced and not chamfered, even on coarse meshes.
Another pitfall of immersed finite element methods is the loss of numerical stability when the intersection of a shape function support with the physical domain becomes very small. This issue has been successfully addressed in the context of b-spline finite elements [6,12,15]. In this work, we build upon this concept of constraining critical degrees of freedom and apply it to Lagrange basis functions on unstructured meshes. Note that Burman et al. [16] introduced an alternative approach, the so-called ghost-penalty stabilisation method, which is based on an augmented bilinear form. The method's parameters are closely tied to its stability, and we show how to choose these parameters in the context of the introduced stabilisation techniques.
The method we present here is based on our previous works [6,17-19] and is related to [2,3,16,20,21]. Although, as shown in the cited works, the method can be transferred to many physical applications, we focus here on the problem class of nonlinear elasticity.

Weak enforcement of boundary and interface conditions
First, we present the derivation of the proposed immersed finite element method as applied to boundary value problems from nonlinear solid mechanics. Using a weighted residual technique, we obtain the weak form of the problem and give its linearisation. The expressions for material interface problems are subsequently derived in a similar manner.

Boundary value problems of nonlinear solid mechanics
Consider the boundary value problem for nonlinear elasticity in the reference domain Ω ⊂ R^{n_d}, n_d = 2 or n_d = 3 (Fig. 1): Div P(u) + f = 0 in Ω, u = ū on Γ_D, t(u) = t̄ on Γ_N, (1) where u denotes the unknown displacement field, P is the first Piola-Kirchhoff stress tensor, Div is the divergence with respect to reference domain coordinates, and t(u) = P(u) n, with n the outward unit normal vector to Γ, is the boundary traction. The prescribed data are the volume force f, the prescribed displacement ū and the prescribed traction t̄. The boundary of Ω, denoted by Γ, is composed of the disjoint sets Γ_D (Dirichlet boundary) and Γ_N (Neumann boundary), where the respective data are given.
In order to construct a weak form of the boundary value problem (1), a weighted residual approach is taken with the test function v. In mathematical terms, we operate with the Sobolev space H^1(Ω), i.e. the vector fields whose components are all in H^1(Ω); see, among others, [8] for the precise definition. Different from conventional FEM, we do not employ a constrained subspace with essential boundary conditions. The weighted residual method thus becomes: find u ∈ H^1(Ω) such that R(u, v) = 0 for all v ∈ H^1(Ω), (2) with R(u, v) = ∫_Ω δE : S dΩ − ∫_Ω f · v dΩ − ∫_{Γ_N} t̄ · v dΓ. Here, δE denotes the variation of the Green-Lagrange strain tensor (E = ½(Fᵀ F − I), with the deformation gradient F) and S = F⁻¹ P is the second Piola-Kirchhoff stress tensor [22,23]. In the applications section, we work with a compressible Neo-Hooke material with given energy density W(E), and for this hyperelastic case the stress tensor becomes S = ∂W/∂E. Using a Newton method to solve the nonlinear Eq. (2), the k-th iteration takes the form DR(u^(k), v)[Δu] = −R(u^(k), v) and u^(k+1) = u^(k) + Δu (5), where D(·)[Δu] denotes the derivative in the direction of the increment Δu, and Da(u, v)[Δu] = ∫_Ω (Grad v) : Ĉ(u) : (Grad Δu) dΩ.
In this expression, Ĉ denotes the effective elasticity tensor [22]. For simplicity, it is assumed here that the prescribed volume and surface forces, f and t̄, are independent of the displacement u (dead load case). If these assumptions do not hold, the directional derivative of R(u, v) contains the derivatives of the applied force and traction terms. So far, expressions (2) and, consequently, (5) do not take into account the displacement boundary condition u = ū on Γ_D. Therefore, the approach initially introduced by Nitsche [7] is adapted here and the following two terms are added to (5): − ∫_{Γ_D} Dt(u)[v] · (Δu − ũ) dΓ and γ ∫_{Γ_D} (Δu − ũ) · v dΓ, (8) with the predictor ũ = ū in the first iteration (or the appropriate value in the first iteration of every load step) and ũ = 0 otherwise, corresponding to a displacement-controlled Newton method. The scalar γ > 0, necessary for numerical stability, is discussed in the "Numerical stability" section. In summary, the Newton step (5) augmented by the terms (8) constitutes expression (9), where the iteration counter has been omitted for the sake of legibility. The added terms (8) are zero for the exact solution and therefore the method is consistent by construction. Moreover, for hyperelastic materials, expression (9) is symmetric and positive for the right choice of γ and non-softening material behaviour, see the "Numerical stability" section. In the following, we abbreviate (9) by A(u; Δu, v) = ℓ(u; v). (10)

Material interfaces
The formalism presented above for the weak incorporation of displacement boundary conditions can be generalised to interface problems, see also [2,3,13]. For simplicity, let the reference domain be composed of two subdomains, Ω = Ω_1 ∪ Ω_2, and let us ignore the Dirichlet boundary conditions on ∂Ω; the treatment of such conditions is essentially the same as in the "Boundary value problems of nonlinear solid mechanics" section. We focus only on the conditions imposed on the material interface Γ = ∂Ω_1 ∩ ∂Ω_2, see Fig. 2 (interface problem: rectangular domain composed of two subdomains Ω_1 and Ω_2 with common interface Γ). In each subdomain Ω_i the local equilibrium (11) holds. Let u denote the compound displacement field, such that u|_{Ω_i} = u_i, and define the compound test function v similarly. Moreover, for any compound function g, with g|_{Ω_i} = g_i, the jump across Γ is denoted by ⟦g⟧ = g_1 − g_2. For later use, a weighted average {g} is also defined on Γ as {g} = β g_1 + (1 − β) g_2, (13) where 0 ≤ β ≤ 1 is a weighting parameter yet to be discussed. On the interface Γ the conditions (14) are imposed, with prescribed jump functions ⟦ū⟧ and ⟦t̄⟧. These conditions represent the jump in the solid displacements and the traction equilibrium across the interface. In more complex situations, such as soft interfaces, a cohesive law can be imposed relating the interface traction t to the displacement gap ⟦u⟧, see [24]. Here, we assume that ⟦ū⟧ and ⟦t̄⟧ are prescribed and independent of the displacement u. Note that in the evaluation of the tractions t_i the unique normal vector n = n_1 = −n_2 as shown in Fig. 2 is used; this choice is arbitrary. Repetition of the steps of the single-domain problem above yields the weighted residual method (15) for interface problems. The integrand of the interface term is now rewritten employing the average (13) and the interface condition (14)_2, which yields expression (16).
Using a Newton method to solve the nonlinear problem (15) with (16) requires the directional derivative (17). The interface condition (14)_1 is now incorporated by adding terms akin to (8), namely (18), to expression (17). Again, the parameter γ > 0 is yet to be discussed in Appendix A, and we use the predictor ũ = ⟦ū⟧ in the first iteration and zero afterwards. In summary, a step of the Newton iteration to solve the coupled interface problem reads as in (19). Only the first two terms on the right-hand side remain in the case of ⟦ū⟧ = 0 and ⟦t̄⟧ = 0. As before, this expression is represented by the equation A(u; Δu, v) = ℓ(u; v), see (10), and we postpone the discussion of the parameters β and γ to Appendix A.

Finite element discretisation
The linearised weighted residual equations (9) and (19) form the basis of a finite element discretisation. To this end, an embedding domain of simple shape, typically rectangular, is defined such that it fully contains the reference domain Ω. The following finite element discretisation is based on a triangulation of this embedding domain instead of a geometry-conforming mesh of Ω itself (see Fig. 3). We use piece-wise polynomial basis functions φ_I(x) and write for the approximated displacement field u_h(x) = Σ_I φ_I(x) u_I. (20) There is no constraint on the chosen finite element space, but if the surface Γ overlaps with the boundary of the embedding domain, it can be more convenient to use an essential treatment of displacement boundary conditions [8] along this boundary part. On the other hand, if non-nodal basis functions (like, for instance, higher-order b-splines) are used as the finite element basis, the above presented weak incorporation of the boundary conditions works perfectly well on this boundary part too. Let the support of the basis function φ_I be denoted by supp(φ_I). All coefficients u_I of the approximation (20) are discarded a priori if supp(φ_I) ∩ Ω = ∅. By S we denote the set of indices of the remaining coefficients, and thus {φ_I}_{I∈S} forms the full basis of the immersed finite element method. This basis is in general not stable [25] and requires further attention, which is given in the "Numerical stability" section. Using the approximation (20), we reach the final system of equations (21) with the vector coefficients b[I n_d + a] = ℓ(u; e_a φ_I) (and matrix coefficients obtained analogously from the bilinear form A) for the zero-based indices I, J ∈ S, the coordinate directions 0 ≤ a, b < n_d (n_d being the spatial dimension of the problem) and Cartesian unit vectors e_a.
Although this immersed finite element method seemingly leads to the same type of linear system as a conventional, geometry-conforming FEM, there are technical differences which will be discussed in the following: the representation of the boundary or interface Γ, the quadrature of elements traversed by this boundary, and the stabilisation of the basis. The above expressions hold analogously for interface problems. The main difference is that the two fields u_1 and u_2 are approximated in the fashion of (20) independently on the same background mesh, which encompasses both subdomains Ω_1 and Ω_2. Consequently, the elements which are traversed by the material interface approximate both fields, since the FE shape functions of the entire element are used even though the fields are only defined up to the interface on their respective side of the domain. Using two sets of shape functions on these elements allows us to represent a discontinuous derivative of the FE solution and can thus be compared to the element enrichment of XFEM [11]. A good illustration of this implementation detail can be found in [2].

Signed distance functions
The weak forms introduced in the "Weak enforcement of boundary and interface conditions" section allow us to work with a finite element discretisation which is independent of the geometry, but the volume and surface integrals, ∫_Ω (·) dΩ and ∫_Γ (·) dΓ, still need geometry information. To this end, we classify the elements (for instance, the quadrilaterals in the right picture of Fig. 3) by their location with respect to the physical domain Ω. If τ_I denotes any such element, we have three cases:
1. τ_I ∩ Ω = ∅: the element is completely outside of Ω and can be ignored;
2. τ_I ∩ Ω = τ_I: the element is completely inside Ω and its treatment is straightforward, as in any geometry-conforming FEM;
3. ∅ ≠ τ_I ∩ Ω ≠ τ_I: the element is traversed by the domain's boundary and requires special consideration.
Note that elements adjacent to the boundary of the embedding mesh (for instance, the left or bottom boundaries in the right picture of Fig. 3) technically fall into the third category, but do not pose any difficulty apart from the identification of the element faces which lie on that boundary. For the above classification it is sufficient to have an oriented representation of the surface Γ = ∂Ω. Therefore, the surface is either closed or assumed to be extended beyond the boundaries of the embedding domain. Here, we assume that Γ is either given analytically or approximated by means of a surface mesh Γ_h composed of surface elements σ_J. In order to avoid the tedious task of intersecting volume elements τ_I with surface elements σ_J, an implicit geometry representation is introduced. To this end, the signed distance function [26] is used, defined as dist_Γ(x) = s(x) min_{y∈Γ} |x − y| with s(x) = +1 if x ∈ Ω and s(x) = −1 otherwise. (24) In the case of interface problems as introduced in the "Material interfaces" section, the above definition of s(x) refers to Ω_1 and Ω_2 instead of Ω and its complement. If Γ is represented by a mesh Γ_h, the signed distance function dist_{Γ_h} with respect to this mesh is used instead. Moreover, only a piece-wise polynomial approximation of this function is used, dist_h(x) = Σ_K φ_K(x) d_K, (25) where the φ_K are nodal finite element shape functions (not necessarily the same as in the approximation (20)) and the coefficients d_K represent the values of the signed distance at the finite element nodes x_K. The surface representation (23) can be of higher polynomial degree, given by NURBS patches [1,14,27] or subdivision surfaces [28,29]. But the computation of the coefficients d_K in (25) and the quadrature described below are non-trivial tasks if the σ_J have a degree higher than linear simplex elements (straight lines in two or flat triangles in three dimensions); in that case, the computation of the distances d_K requires the solution of nonlinear equations, see, for instance, [30]. In the rest of this work, the σ_J are always linear (n_d − 1)-simplex elements.
Moreover, once piece-wise linear elements are used for the surface representation (23), the optimal convergence rate of any higher-order method is impeded by this geometry approximation error, see [8]. Figure 4 shows a two-dimensional example where the boundary Γ is composed of three parts: Γ_0 is the part of the boundary of Ω that coincides with the boundary of the embedding domain and does not require any special attention; Γ_1 and Γ_2 are separate parts which are immersed in the background grid. For the computation of the distance function dist_Γ, it is convenient to treat Γ_1 and Γ_2 separately, as shown in the figure. The final distance function is then composed as the minimal value of these distances, dist_Γ(x) = min( dist_{Γ_1}(x), dist_{Γ_2}(x) ). (26) See also [31] for arithmetic with distance functions. Figure 4 shows the iso-curves of the individual distance functions dist_{Γ_i} as well as of the composite function dist_Γ. The extension of this approach to a larger number of immersed surfaces is straightforward.
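The composition rule (26) is easy to prototype. The following sketch (plain Python; the primitive constructors and their names are our own choices for illustration) combines positive-inside signed distance functions by taking the pointwise minimum:

```python
import math

def dist_circle(cx, cy, r):
    """Signed distance to a circle, positive inside (the convention of the text)."""
    return lambda x, y: r - math.hypot(x - cx, y - cy)

def dist_halfplane(nx, ny, d):
    """Signed distance to the half-plane n . x <= d, positive inside;
    (nx, ny) is assumed to be a unit normal."""
    return lambda x, y: d - (nx * x + ny * y)

def compose_min(*dists):
    """Composite distance function as in (26): the minimum over the parts."""
    return lambda x, y: min(f(x, y) for f in dists)
```

For instance, compose_min(dist_circle(0, 0, 1), dist_halfplane(1, 0, 0.5)) is positive exactly in the disc truncated by the line x = 0.5.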
Once the function dist_Γ has been determined, the above classification of volume elements τ_I is carried out by means of the nodal values d_K of the distance function: if all d_K of the element τ_I are strictly positive (negative), the element is inside (outside) of the domain. If a change of sign of the d_K occurs, τ_I is traversed by the immersed boundary Γ.
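This sign-based classification amounts to a few comparisons per element; a minimal sketch (function and label names are ours):

```python
def classify_element(nodal_distances):
    """Classify an element by the nodal values d_K of the signed distance:
    all strictly positive -> inside, all strictly negative -> outside,
    otherwise the element is traversed by the immersed boundary."""
    if all(d > 0.0 for d in nodal_distances):
        return "inside"
    if all(d < 0.0 for d in nodal_distances):
        return "outside"
    return "cut"
```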
It remains to outline how the coefficients d_K for a given surface Γ are computed. In the case of an analytic surface representation by an implicit function, these coefficients are calculated directly. In the case of an immersed surface mesh, one needs to find the surface element σ*_K which contains the point x*_K closest to x_K; see, for instance, [32] for such basic primitive tests as the closest point on a triangle to a point. With the knowledge of the closest element σ*_K, it can be decided whether x_K lies on the positive or the negative side of this element in order to determine the sign s(x_K) as defined in (24). This decision is based on the premise that the surface mesh is well oriented. Note that, when the closest point falls on an edge or a vertex, ambiguities can arise in deciding whether a point is inside or outside the surface mesh [26], see the case shown in Fig. 5.
At the acute corner in the figure, the region of points whose closest point is the vertex B is delimited by the outer cone. For all points in this cone, σ_1 and σ_2 are possible choices for the closest surface element. The cone contains the region 'a', in which the points are outside with respect to both elements. The points in region 'b' are outside with respect to one of the possible closest surface elements and inside with respect to the other; hence, for this region the mentioned ambiguity can occur. One solution to this problem is to introduce angle-weighted vertex normal vectors [26], but this requires extra data structures. Here we choose the simpler approach shown in Fig. 5: the point x_K has a larger distance to the extension plane of σ_2 than to the extension plane of σ_1. This distance is given by the inner product of the element normal vector and the distance vector between the considered grid point and the closest surface point (here, B). Choosing the element with the larger value of this distance resolves the ambiguity. The method is also used in three dimensions, with the only difference being a larger set of candidate closest elements.
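The described tangent-plane test can be sketched as follows (a toy version with our own function names; the normals are assumed to be unit vectors pointing out of the domain, so that the returned sign is +1 inside and −1 outside, matching the convention of (24)):

```python
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def resolve_sign(x, p, candidate_normals):
    """Resolve the inside/outside ambiguity at a shared closest point p (a
    vertex or edge of several surface elements): among the candidate element
    normals, pick the one whose extension plane lies farthest from the grid
    point x, i.e. with the largest |n . (x - p)|, and take the sign from the
    inner product with that normal (outward normals: positive = outside)."""
    v = tuple(xi - pi for xi, pi in zip(x, p))
    best = max(candidate_normals, key=lambda n: abs(dot(n, v)))
    return -1 if dot(best, v) > 0 else +1
```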
Finally, we consider the numerical complexity of the distance function computation. If there are N_σ elements in the surface mesh and N_K nodes in the volume mesh, a brute-force approach requires N_σ × N_K closest point computations. In many cases, this number can be substantially reduced by precomputing a bounding box [32] of the surface Γ_h and assigning a default value to the d_K of nodes outside of this box, but the essential complexity remains of order O(N_σ × N_K). Complexity reduction is possible by generating a hierarchy of bounding boxes [32] or using so-called marching methods, see e.g. [33].

Fig. 5 Signed distance computation in the region of an acute corner. The grid point x_K has the vertex B as closest point on the surface, but it lies on opposing sides with respect to the adjacent surface elements σ_1 and σ_2; based on the larger distance to the tangent planes of the surface elements σ_i, the element σ_2 is chosen to determine the outside position of the point x_K.
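The brute-force variant with the bounding-box early-out can be sketched as follows (two-dimensional, with segments as surface elements; the helper names and the default value are our own choices):

```python
import math

def closest_point_on_segment(p, a, b):
    """Closest point to p on the segment from a to b."""
    abx, aby = b[0] - a[0], b[1] - a[1]
    t = ((p[0] - a[0]) * abx + (p[1] - a[1]) * aby) / (abx * abx + aby * aby)
    t = max(0.0, min(1.0, t))  # clamp to the segment
    return (a[0] + t * abx, a[1] + t * aby)

def unsigned_distances(nodes, segments, default=1.0):
    """Brute-force O(N_sigma x N_K) distance evaluation. Nodes outside the
    bounding box of the surface mesh, inflated by 'default', keep the default
    value, since their true distance cannot be smaller than that."""
    xs = [q[0] for s in segments for q in s]
    ys = [q[1] for s in segments for q in s]
    box = (min(xs) - default, max(xs) + default,
           min(ys) - default, max(ys) + default)
    out = []
    for p in nodes:
        if not (box[0] <= p[0] <= box[1] and box[2] <= p[1] <= box[3]):
            out.append(default)
            continue
        out.append(min(math.dist(p, closest_point_on_segment(p, a, b))
                       for a, b in segments))
    return out
```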

Constructive solid geometry modelling
Now, we consider a different approach for integrating finite element analysis with geometric design, similar to the ideas presented in [34]. Specifically, we consider the construction of a three-dimensional geometry by means of CSG, see, for instance, [35,36]. An example of such a modelling process is given in Fig. 6, where one begins with a cube as a workpiece and performs set operations with other geometric primitives until the desired geometry is obtained. These operations are commonly the union ∪, the intersection ∩, the subtraction \ and the set complement (·)′. Based on De Morgan's laws [36], it suffices to work with the canonical operations intersection and complement, and to represent the other two as compositions thereof, more precisely A ∪ B = (A′ ∩ B′)′ and A \ B = A ∩ B′. The conventional finite element approach is to work through such a CSG pipeline, export a geometry representation and use mesh generation software to create a body-fitted volume mesh for the numerical analysis. The direct modification for an immersed finite element method is to export a surface representation of the geometry and embed this into the mesh by the methods described in the "Immersed finite element method" section. Here, a third way is suggested, in which the set operations are directly applied to the embedding (non-conforming) volume mesh. As outlined above, it suffices to provide the complement and intersection operations only. The former is trivially achieved: the use of a signed distance function generates an inside and an outside partition of the mesh, and reversing these partitions gives the complement. For this reason, all that needs to be explained is the intersection operation.
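With positive-inside signed distance functions, the complement is a sign flip and the intersection a pointwise minimum; the remaining operations then follow from De Morgan's laws. A minimal sketch (names are ours):

```python
def complement(d):
    """Set complement: flip the sign of a positive-inside signed distance function."""
    return lambda x: -d(x)

def intersection(d1, d2):
    """Canonical intersection: a point lies in both sets iff both values are positive."""
    return lambda x: min(d1(x), d2(x))

def union(d1, d2):
    # De Morgan: A u B = (A' n B')'
    return complement(intersection(complement(d1), complement(d2)))

def subtraction(d1, d2):
    # De Morgan: A \\ B = A n B'
    return intersection(d1, complement(d2))
```

Note that the min/max composition of exact signed distances yields only a lower bound on the true distance near the primitive intersections, which is acceptable for the classification purposes here.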
A simple two-dimensional example is given in Fig. 7. In a first embedding step, the quadrilateral elements are subdivided into two triangles each, and then every such triangle is intersected by means of the nodal values of the signed distance function dist_{Γ_1} [37]. The resulting outcome is the left picture in Fig. 7, where the red square-shaped marks indicate the location of the intersection points.
In the second embedding step, the distance function dist_{Γ_2} is used, which gives rise to the element sets I_2, O_2 and C_2. All elements which belong to the outside are directly assigned to the complementary domain. On the other hand, all elements of I_1 which also belong to I_2 are inside the final domain Ω, hence I = I_1 ∩ I_2 = {τ_20}. Finally, there are the intersection cases. Elements belonging to C_1 and I_2 (τ_21) keep their status and sub-division. Elements from I_1 and C_2 (τ_10) are subject to the same decomposition methods as C_1. It remains to discuss the situation of the elements which belong to C_1 ∩ C_2, the ones which are intersected by both boundaries; in our example of Fig. 7 this is the element τ_11. In this case, the composing triangles are simply intersected with Γ_2 as if they were elements of their own. Proper categorisation of these simplex shapes defines the final domain Ω and its complement, see the middle picture of Fig. 7. The advantage of this approach becomes clear when looking at the right picture of Fig. 7. Shown is the result for the same target domain Ω, but here first the composition of the individual distance functions dist_{Γ_i} is computed according to expression (26) and then the element intersections are constructed. Clearly, in the right picture the corner is chamfered, whereas in the above outlined approach this geometric feature is preserved. This is the distinctive characteristic of the presented idea: by successively embedding the geometry primitives into the mesh, the sharp features at the primitive intersections are preserved. It is important to remark that the boundaries Γ_i have been represented exactly in this example, but this is solely owed to the fact that they are straight lines. In the more general situation of curved boundaries, they are again represented on the finite element mesh by piece-wise linear simplex elements.
But even though these surrogate boundaries do not exactly reproduce the given geometry, the approach presented here still allows the representation of corners or edges at the intersection locations of the original primitives which lie inside the finite elements.
The presented method for CSG modelling based on finite element meshes is straightforward to extend to three dimensions. In the plane case outlined so far, the rectangular elements are subdivided into two triangles, which themselves are triangulated in order to recover the implicit surface in the form of triangle edges. This approach is akin to a two-dimensional version of marching cubes, and in three dimensions we make use of a similar technique. The three-dimensional element shapes used are either tetrahedrons or hexahedrons. Figure 8 shows how a hexahedron is decomposed into six tetrahedrons, such that it remains to consider this shape only. Given a tetrahedron with values of the signed distance function at its vertices, we can classify the cases shown in the right part of the figure. Based on linear interpolation along the edges, the zeros of the distance function are recovered and give rise to two volume tessellations τ_I⁻ and τ_I⁺ whose common faces form the triangulated surface σ_I. Using these decompositions, the above outlined intersection operations of two geometry primitives can be carried out analogously in three dimensions.
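The per-tetrahedron case classification and edge-zero recovery can be sketched as follows (vertex numbering and names are our own; on a cut edge (a, b) the linear interpolant (1 − t) d_a + t d_b vanishes at t = d_a / (d_a − d_b)):

```python
TET_EDGES = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]

def classify_tet(d):
    """Classify a tetrahedron from the signed distances d[0..3] at its vertices
    and return the zeros of the linear interpolant on the cut edges as
    (a, b, t) triples, the cut point being x = (1 - t) x_a + t x_b."""
    if all(v > 0.0 for v in d):
        return "inside", []
    if all(v < 0.0 for v in d):
        return "outside", []
    cuts = [(a, b, d[a] / (d[a] - d[b]))
            for a, b in TET_EDGES if d[a] * d[b] < 0.0]
    return "cut", cuts
```

Three cut edges correspond to a triangular interface facet (one vertex separated from the other three), four cut edges to a quadrilateral facet that is split into two triangles for the tessellations τ_I∓.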
Alternative approaches for increasing the quality of implicit geometry representations in the vicinity of sharp features (such as edges and corners) exist. In [38] the operations of surface reconstruction by means of marching cubes and the distance function computation are combined in order to generate a so-called directed distance field allowing for a better resolution of surface features. On the other hand, enriched distance functions are presented in [39] where additional edge and vertex descriptors augment the distance geometry representation. Although both approaches are promising concepts in the context of immersed finite element methods, they are not further considered in this work.
We conclude this section by noting that the tessellation techniques used here also help to construct numerical integration schemes for the elements that are traversed by the boundary or interface. The cut elements are general polytopes, for which quadrature rules are not easily obtained. There are many techniques that address this problem, such as moment fitting [40], surface-only integration [41] and adaptive decomposition of the integration region [14,42]. But since a tessellation into simplex shapes is already available, we use composite Gauß-type quadrature rules, see e.g. [11].
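In two dimensions, the composite rule amounts to mapping a reference-triangle Gauß-type rule to every sub-triangle of the tessellation and accumulating points and weights. The sketch below uses the three-point edge-midpoint rule, which is exact for quadratics (our choice for illustration):

```python
def tri_area(a, b, c):
    """Area of the triangle with vertices a, b, c."""
    return 0.5 * abs((b[0] - a[0]) * (c[1] - a[1]) - (c[0] - a[0]) * (b[1] - a[1]))

def composite_rule(triangles):
    """Composite quadrature on a simplex tessellation of a cut element:
    the degree-2 exact edge-midpoint rule on every triangle, with weights
    equal to one third of the triangle area."""
    pts, wts = [], []
    for a, b, c in triangles:
        w = tri_area(a, b, c) / 3.0
        for p, q in ((a, b), (b, c), (c, a)):
            pts.append(((p[0] + q[0]) / 2.0, (p[1] + q[1]) / 2.0))
            wts.append(w)
    return pts, wts
```

For instance, tessellating the unit square into two triangles and applying the rule integrates f(x, y) = x y to the exact value 1/4.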

Numerical stability
Up to now, it has been shown how to derive an immersed finite element method for boundary value and interface problems, see (9) and (19), and how to compute the matrix coefficients of the linear system of equations. But the stable solution of this final system of equations (21) remains to be discussed, especially in view of the method's parameters γ (for boundary value and interface problems) and β (for interface problems only).

Sources of instability
As an illustrative example, consider a one-dimensional problem for the domain Ω = (0, x_ε) and a constant material parameter α. The boundary conditions are a prescribed value u = 0 at the left end and either a zero derivative (homogeneous Neumann) or a zero function value (homogeneous Dirichlet) at the right end. Let (0, 2h) be the embedding domain and let two linear finite elements of size h be used for the discretisation, with x_ε = (1 + ε)h inside the second element, see Fig. 9. First, we consider the case with a homogeneous Neumann boundary condition at the right end. The left-side boundary condition is incorporated essentially and the system matrix becomes K_N = (α/h) [ 1+ε, −ε ; −ε, ε ]. Obviously, for ε → 1 this matrix recovers the standard finite element matrix for this problem with its known properties. The eigenvalues of this matrix have the values λ_{1,2} = (α/2h) (1 + 2ε ∓ √(1 + 4ε²)). Clearly, the smaller eigenvalue goes to zero in the limit ε → 0, the case of a vanishing cut element. As expected, the matrix K_N is ill-conditioned in this limit.
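The limit behaviour is easy to verify numerically. The sketch below rebuilds the cut-element Neumann matrix under the stated assumptions (x_ε = (1 + ε)h, exact integration of the two linear elements; function names are ours) and evaluates its eigenvalues:

```python
import math

def neumann_matrix(eps, h=1.0, alpha=1.0):
    """Cut-element stiffness matrix K_N of the 1D test problem: only the part
    (0, x_eps) of the second element contributes to the integration."""
    return [[alpha / h * (1.0 + eps), -alpha / h * eps],
            [-alpha / h * eps, alpha / h * eps]]

def eigenvalues_2x2(m):
    """Eigenvalues of a symmetric 2x2 matrix via trace and determinant."""
    tr = m[0][0] + m[1][1]
    det = m[0][0] * m[1][1] - m[0][1] * m[1][0]
    disc = math.sqrt(tr * tr - 4.0 * det)
    return (tr - disc) / 2.0, (tr + disc) / 2.0
```

For ε = 1 the standard matrix (α/h)[[2, −1], [−1, 1]] is recovered, while for ε = 10⁻⁶ the smaller eigenvalue is of order 10⁻⁶, so the condition number grows like 1/ε.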
We now turn to the Dirichlet case and evaluate the left-hand side of expression (7) for this simple test problem. The resulting stiffness matrix K_D is obtained by replacing the surface integrals with point evaluations at x_ε; note that the expression for the system matrix has been multiplied by the factor h/α. The eigenvalues of K_D are not easily determined, but the condition det(K_D) > 0 is more workable: since the trace of the matrix is positive and equals λ_1 + λ_2, a positive determinant (recall det(K_D) = λ_1 λ_2) is sufficient for positive definiteness. One obtains the condition (31) on γ. Fulfilment of this condition guarantees that the matrix is positive definite for a fixed mesh size h, but unfortunately it implies γ → ∞ for ε → 0. The use of a very large value for γ can lead to undesired numerical problems.

Fig. 9 One-dimensional test example. Two linear finite elements with the right boundary inside the second element.
In the case of the interface problems and formulation (19), the situation is slightly better. The extra parameter β can be adjusted such that a finite value of γ is always achievable. Such a choice is proposed in [43], where β depends on the material parameters of the subdomains and the sizes of the cut elements, |τ_I ∩ Ω_i|. Using this approach, the system matrix always has positive eigenvalues (for the considered problem class) with a finite value of γ. Nevertheless, the minimal eigenvalue goes to zero for vanishing sizes of the cut elements. Even though the parameter choices of [43] show good performance in terms of the quality of the numerical results, the matrix condition number still cannot be bounded for a fixed mesh and arbitrary interface locations.

Stabilisation
The above indicated sources of numerical instability all stem from the same situation: for some degrees of freedom, the size s_I of the intersection of the associated shape function support with the physical domain, supp(φ_I) ∩ Ω, becomes very small compared with the mesh size (32), where supp(φ_I) denotes the support of shape function φ_I and h is a measure of the mesh size. In all of the above cases, Neumann, Dirichlet or interface problem, this leads to severe ill-conditioning of the final system matrix. To solve this problem, the following approaches, among others, have been proposed:
S-1 Discarding all degrees of freedom with support intersection below a certain threshold, s_I < εh;
S-2 Adding a face-based stabilisation term [16];
S-3 Constraining degenerate degrees of freedom [6,12].
As reported, among others, in [12], approach S-1 leads to a loss of approximation order. Although appealing due to its simplicity, this drawback can be prohibitive in some applications. An ad-hoc approach to remedy the stability problem is to locally adapt the finite element mesh in order to avoid too small values of s_I. Even though simple at first sight, a robust realisation of this idea in three dimensions is not straightforward, and mesh entanglement needs to be avoided. The support size s_I is increased if specific nodes are moved away from the surface Γ, but there is an interesting alternative in which the points are snapped to the surface, thereby generating a conforming mesh; see [44] for a two-dimensional analysis of this idea.
Another approach, S-2, is proposed in [16], where the jump of the function gradients across certain element faces is added to the weak form in order to guarantee stability of the method. In contrast to the result (31), the system matrix stays well-conditioned for small values of γ in the limit ε → 0. This approach requires the evaluation of surface integrals over interior mesh faces, a technicality which requires additional data structures in many codes, but does not add much to the overall difficulty of implementing an immersed finite element method. Nevertheless, there is a drawback to this approach, since it introduces another weighting factor whose adjustment is not straightforward: for too small values of this factor the stabilisation effect disappears, and for too large values the method's accuracy is affected [16].
Finally, we consider S-3, which relies on the concept of coupling degrees of freedom with too small supports to other degrees of freedom from the interior of the domain. In order to outline this approach, the degrees of freedom are first classified according to the size s_I of the intersection of their support with the domain, as defined in (32). In the "Finite element discretisation" section, the set S has been introduced, which contains all indices of shape functions for which s_I is larger than zero. Introducing a threshold ŝ, the set S is now decomposed into the disjoint index sets A and B with the definition A = {I ∈ S : s_I ≥ ŝ} and B = {I ∈ S : s_I < ŝ}. (33)
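The classification (33) in code (a sketch with our own names, taking the support sizes s_I as given):

```python
def split_dofs(s, s_hat):
    """Decompose the active index set S = s.keys() into the disjoint sets of
    (33): A holds the well-supported degrees of freedom (s_I >= s_hat), B the
    degenerate ones (s_I < s_hat) that are to be constrained."""
    A = {I for I, sI in s.items() if sI >= s_hat}
    B = set(s) - A
    return A, B
```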
The threshold ŝ used in this classification has to depend on the mesh size h and should not be larger than one typical element size. The basic idea of Höllig et al. [12] is to constrain the degrees of freedom from the set B to suitably chosen degrees of freedom from A(J), a subset of A, u_J = Σ_{I∈A(J)} c_IJ u_I for J ∈ B, (34) where the coefficients c_IJ will be discussed further below. These constraints give rise to the modified shape function basis φ̃_I = φ_I + Σ_{J∈B(I)} c_IJ φ_J for I ∈ A. (35) In this reordering of the finite element approximation (20), a new set B(I) is used which contains all indices J from B such that I ∈ A(J). For the implementation of this stabilisation method it is sufficient to work with expression (34), but the result (35) demonstrates that effectively a modified shape function basis {φ̃_I}_{I∈A} is generated, and it illustrates the notion of extended splines as given in [12]. Note also that B(I) = ∅ for all degrees of freedom that are not in the vicinity of the boundary, in which case φ̃_I = φ_I, so that most shape functions are not affected. Since the supports of the modified basis functions φ̃_I are larger, the bandwidth increases for the degrees of freedom in the vicinity of the boundary. The storage requirement of the final system matrix is, of course, not larger than it would be for the original (unstable) basis functions φ_I.
There are two open questions when using this approach: (i) the choice of the index set A(J) associated with J, and (ii) the values of the constraint weights c_IJ. The origin of this approach, as introduced in [12], is the stabilisation of b-spline discretisations. In that particular situation, the underlying mesh is logically Cartesian and an explicit expression for the coefficients c_IJ can be given as a function of the multi-indices used to label the grid. See also [6] for a more intuitive interpretation of the arising extrapolation of Lagrange polynomials and its efficient implementation. The aim of this stabilisation procedure is to maintain the convergence order of the method and therefore not to lose the polynomial approximation quality of (20) due to the constraints (34). In other words, the modified basis functions φ̃_I introduced in (35) have to represent the same polynomials as the ϕ_I themselves.
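For linear Lagrange elements on triangles, such an extrapolation of the polynomial basis amounts to evaluating barycentric coordinates outside the element; a small NumPy sketch (the function name `p1_weights` is our own):

```python
import numpy as np

def p1_weights(tri, x):
    """Linear (P1 Lagrange) basis values of triangle `tri` at point x.

    tri: (3, 2) array of vertex coordinates. The point x may lie outside
    the triangle, in which case the returned values extrapolate the
    linear basis -- exactly what the weights c_IJ = phi_I(x_J) require.
    """
    T = np.column_stack((tri[1] - tri[0], tri[2] - tri[0]))
    lam12 = np.linalg.solve(T, x - tri[0])
    return np.array([1.0 - lam12.sum(), lam12[0], lam12[1]])

tri = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
c = p1_weights(tri, np.array([1.5, 0.5]))  # evaluation point outside the triangle
```

The weights always sum to one and reproduce linear functions exactly, which is precisely the polynomial-preservation requirement stated above.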
In order to outline the procedure for obtaining A(J) and the corresponding coefficients c_IJ, consider the situation depicted in Fig. 10. The degree of freedom u_J, J ∈ B, resides at node x_J, and the size of the intersection of its support (hatched in the picture) with the domain is below the threshold ŝ. Searching through the elements in the vicinity of x_J, one finds an element τ_K(J) whose connected degrees of freedom all belong to A. Any element entirely inside the domain fulfils this condition. Normally, many such elements can be found and the closest one is selected, where the distance between the element midpoint and x_J is a possible way to measure the proximity. The selected element τ_K(J) gives rise to the index set A(J) ⊂ A associated with u_J. Formally, we can write

A(J) = {I ∈ A : x_I is a node of τ_K(J)}.   (36)

Once this set is defined, the weights c_IJ are calculated by evaluating the shape functions associated with τ_K(J) at the node x_J,

c_IJ = ϕ_I(x_J),   I ∈ A(J).   (37)

This choice of weights is an extension of the idea given in [15], where the weights are defined for non-uniform b-splines as dual functionals applied to the polynomials in a chosen grid element. Here, the point evaluation of (37) is the corresponding dual functional of Lagrange polynomials [8]. Note that x_J ∉ τ_K(J) and thus c_IJ represents an extrapolation of the polynomial basis spanned in τ_K(J) to the outside point x_J, see also Fig. 10. The stabilisation procedure can be summarised as follows:

1. Categorise A and B using a threshold ŝ, see (32) and (33).
2. For all J ∈ B:
   • find an element τ_K(J) close to x_J whose degrees of freedom all belong to A,
   • define the constraint coefficients as c_IJ = ϕ_I(x_J) for all I ∈ A(J).
3. Assemble the final system of equations using the constraint equations (34) applied to test and trial spaces.
4. After solving the global system, calculate the constrained degrees of freedom u_J with J ∈ B according to (34).

Fig. 10 Cut-element stabilisation on a triangular mesh.
Node x_J is the location of a degree of freedom u_J, J ∈ B, and the element τ_K(J) is used for constraining u_J.

With respect to the implementation, a few remarks have to be made. The code has to be able to search the elements in the neighbourhood of a given node. For instance, the element τ_K(J) in Fig. 10 does not lie in the support of ϕ_J but in the ring of elements around that support. Theoretically, for very extreme domain shapes, the nearest admissible τ_K(J) could lie far away from x_J, but here we assume that the mesh is fine enough such that there is always an element nearby. Cusp-shaped domains are excluded from the onset. In addition, one has to evaluate the shape functions of τ_K(J) at x_J, and this first requires finding the reference coordinate ξ_J (outside of the reference element) such that the geometry representation of the chosen element reproduces x_J when evaluated at this coordinate, that is

x(ξ_J) = Σ_{I ∈ τ_K(J)} ϕ_I(ξ_J) x_I = x_J .

Here, we restrict ourselves to meshes in which all elements are affine transformations of the reference element. Higher-order geometry representations of the volume mesh are excluded, but they are also not necessary since the mesh, by design of the immersed method, need not conform to the geometry of Ω.
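The neighbourhood search can be sketched as follows (hypothetical mesh data layout; a real code would restrict the candidates to the ring of elements around supp(ϕ_J) rather than scanning all elements):

```python
import numpy as np

def find_donor_element(x_J, element_dofs, centroids, A):
    """Select tau_K(J): the closest element whose dofs all belong to A.

    element_dofs: list of dof-index tuples per element (hypothetical layout);
    centroids: (n_el, dim) array of element midpoints, used here as the
    proximity measure to the node x_J.
    """
    candidates = [k for k, dofs in enumerate(element_dofs)
                  if all(I in A for I in dofs)]
    if not candidates:
        raise RuntimeError("no fully interior element near x_J; mesh too coarse")
    dist = np.linalg.norm(centroids[candidates] - x_J, axis=1)
    return candidates[int(np.argmin(dist))]

# toy mesh: element 2 touches the critical dofs 5 and 6 and is excluded
element_dofs = [(0, 1, 2), (2, 3, 4), (4, 5, 6)]
centroids = np.array([[0.0, 0.0], [1.0, 0.0], [2.0, 0.0]])
K_J = find_donor_element(np.array([1.4, 0.0]), element_dofs, centroids,
                         A={0, 1, 2, 3, 4})
```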

Numerical examples
At last, a few numerical examples are presented in order to study and demonstrate the performance of the immersed finite element method as presented here. Unless indicated otherwise, the spatial discretisation of all problems is carried out with linear finite elements. As shown in the appendix, the Nitsche parameter is chosen as γ = γ_0 α/h with the mesh width h, the representative material parameter α and a dimensionless scalar γ_0. The default choice for this parameter is γ_0 = 10, and for interface problems the additional parameter is chosen as β = 0.5. The threshold ŝ used to distinguish between the degree of freedom sets A and B in the stabilisation of the "Stabilisation" section is set to the size of one element.

Convergence and robustness analysis
At first, the method's performance under variation of various parameters is assessed. For this purpose, an essentially one-dimensional Poisson problem is used as depicted in Fig. 11, left, with a forcing function f(x) = α² sin(αx_1) and α = 3π/(2x_δ). The resulting exact solution is then u(x) = sin(αx_1). The signed distance function is dist(x) = x_δ − x_1 and the boundary is represented exactly. At first, this problem is analysed using a structured mesh as shown in the figure.

Fig. 11 One-dimensional problem. Setup (left) and a comparison of exact with approximate solution and its derivative for x_δ = 6/7 (right)

Figure 11, on the right, shows the analytic solution (solid black line) along the x_1-axis and its derivative (dashed line) for a boundary location at x_δ = 6/7. The approximations u_h and ∂_1 u_h for a mesh with 5 × 5 elements (red) and a 10 × 10 element mesh (blue) are also displayed. One can see that the numerical approximation u_h coincides with the analytic solution at the finite element nodes and, moreover, at the boundary location x_δ.
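The manufactured solution can be verified directly; a small NumPy check using finite differences (our own sketch, not code from the paper):

```python
import numpy as np

# manufactured solution of the 1D model problem: u depends on x_1 only
x_delta = 6.0 / 7.0
alpha = 3.0 * np.pi / (2.0 * x_delta)

u = lambda x: np.sin(alpha * x)             # exact solution u(x) = sin(alpha x_1)
f = lambda x: alpha**2 * np.sin(alpha * x)  # forcing f(x) = alpha^2 sin(alpha x_1)

# -u'' = f holds exactly; moreover u'(x_delta) = alpha*cos(3*pi/2) = 0, i.e.
# the solution carries a homogeneous Neumann datum at the immersed boundary
```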
The convergence of the method is shown in the left of Fig. 12, where the numerical errors in the L²-norm and H¹-seminorm are shown for approximations with linear and quadratic Lagrange polynomials. These results exhibit the expected optimal convergence rates [8]. In this graph, the boundary location is held fixed at x_δ = 6/7 and the mesh width h is decreased. On the other hand, the right side of Fig. 12 shows the smallest and largest eigenvalues of the system matrix in dependence of the boundary location for a non-stabilised implementation and for the stabilisation presented in the "Stabilisation" section. Here, a fixed 40 × 40 mesh is used and the location of the boundary is at x_δ = (34 + ε)h with the parameter 0 ≤ ε ≤ 1. A Neumann boundary condition at x_δ is considered and, hence, one always has a(u_h, u_h) > 0 and λ_min > 0. One can clearly see that λ_min ∈ O(ε) for small ε in the non-stabilised case (note that the figure in fact shows the inverse 1/λ_min). Clearly, the matrix condition number grows without bound. The stabilisation as outlined in the "Stabilisation" section, however, guarantees a constant value of λ_min well above zero. The largest eigenvalues coincide for both cases.

Now we turn to the problem with a Dirichlet boundary condition at x_δ. Using the same variation of the location of this boundary as above, Fig. 13 shows the smallest eigenvalue λ_min for the stabilised method and for the non-stabilised method with various values of γ_0 (recall that γ = γ_0/h). One can see that without stabilisation the considered minimal eigenvalue changes sign for decreasing values of ε, rendering the system matrix indefinite (and singular when zero is crossed). In order to force λ_min > 0 one can increase the value of γ_0, but for ε → 0 this value grows without bound and one effectively gets λ_max → ∞, which likewise deteriorates the condition number of the matrix, as already discussed in the "Sources of instability" section.
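The reported scaling λ_min ∈ O(ε) can be reproduced with a few lines in one dimension, assuming unit element width and a strong Dirichlet condition at the uncut end (our own toy model, not the paper's 2D setup):

```python
import numpy as np

def lambda_min_cut_1d(n, eps):
    """Smallest stiffness eigenvalue of a 1D Poisson problem whose domain
    ends inside the last of n unit-width elements at the cut fraction eps.

    The last basis function keeps only a sliver of support of size eps, so
    its energy contribution -- and with it lambda_min -- scales like eps.
    A strong Dirichlet condition is applied at x = 0 (node 0 eliminated);
    the cut end carries a Neumann condition.
    """
    K = np.zeros((n + 1, n + 1))
    ke = np.array([[1.0, -1.0], [-1.0, 1.0]])  # element stiffness for h = 1
    for e in range(n - 1):
        K[e:e + 2, e:e + 2] += ke
    K[n - 1:, n - 1:] += eps * ke              # cut element of length eps
    return np.linalg.eigvalsh(K[1:, 1:])[0]

l_coarse = lambda_min_cut_1d(8, 1e-3)
l_fine = lambda_min_cut_1d(8, 1e-5)
# shrinking the cut fraction by 100 shrinks lambda_min by roughly 100
```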
Next, the stabilisation technique is applied to an unstructured mesh as shown in Fig. 14. Note that this case is not covered by the original idea of this technique as given in [12], which was only designed for b-spline basis functions on structured meshes. The left of Fig. 15 shows the convergence of the stabilised method for linear triangle elements. Finally, the location of the boundary is varied again and the condition number for a Neumann problem is considered in the right of Fig. 15. Whereas in the non-stabilised case this value shows a very erratic behaviour with large peaks, the condition number for the stabilised method is almost constant at a low value.

Now, an interface problem is considered. Figure 16 shows the computational domain, which is composed of a circular domain Ω_1 embedded in a square domain Ω_2. The geometric parameters are chosen as R = 0.75 and L = 2, respectively. This problem together with its analytic solution is taken from [2]. In the right graph of Fig. 16 the convergence behaviour is shown for different values of the interface weight factor β. For the three considered values β = 0, 0.5 and 1, the curves are indistinguishable. Optimal convergence rates are also achieved for mesh sizes smaller than h ≈ 0.02.

At last, we consider the influence of the geometry representation. As outlined in the "Constructive solid geometry modelling" section, we have two approaches available: the use of a signed distance function representing the entire embedded surface, and the successive embedding of the geometry primitives that form the final model. For simplicity, consider a square that coincides on two of its edges with the mesh boundary, whereas the other two edges are represented implicitly. Figure 17 shows the effect of the two introduced approaches in the left and middle images, respectively. Clearly, the upper right corner is chamfered off in the first approach, but represented exactly in the second.
As a numerical problem we have chosen −Δu = 1 on a unit square subject to u = 0 on the lower and left boundaries and ∂u/∂n = 0 on the other two boundaries. An analytic solution to this problem is available, for instance in [45], in the context of Poiseuille flow in a rectangular channel. The right graph in Fig. 17 shows the convergence rates for the two types of geometry modelling. Optimal convergence rates are obtained in both cases. Nevertheless, the exact representation of the corner leads to a much smoother convergence behaviour with lower approximation errors for coarse mesh sizes.

Mesh-embedded CSG
Here, the domain as obtained by the CSG process of Fig. 6 is reconsidered, see also the left of Fig. 18. The embedding domain is Ω = (0, 1)³ and equipped with a uniform hexahedral mesh. Following the mesh-based Boolean operations as introduced in the "Constructive solid geometry modelling" section, the immersed geometry is obtained by

1. intersection with a sphere of radius 0.65 centred at (0.5, 0.5, 0.5), and
2. successive subtraction of cylinders around the same centre with radius 0.3 and in the directions of the x_i-coordinate axes.
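Assuming signed distance functions for the primitives, these Boolean operations reduce to pointwise max operations on the level-set values; a sketch (our own illustration):

```python
import numpy as np

def d_sphere(x, c, r):
    """Signed distance to a sphere: negative inside, positive outside."""
    return np.linalg.norm(x - c) - r

def d_cylinder(x, c, r, a):
    """Signed distance to an infinite cylinder of radius r whose axis has
    unit direction a and passes through the point c."""
    v = x - c
    radial = v - (v @ a) * a
    return np.linalg.norm(radial) - r

def csg_model(x):
    """Level-set of the model of Fig. 18: sphere(0.65) minus three
    axis-aligned cylinders(0.3), all centred at (0.5, 0.5, 0.5).

    Boolean operations act on the distance values: intersection -> max,
    subtraction of B from A -> max(d_A, -d_B). The result is a valid
    level-set but only approximately a signed distance function.
    """
    c = np.full(3, 0.5)
    d = d_sphere(x, c, 0.65)
    for k in range(3):
        d = max(d, -d_cylinder(x, c, 0.3, np.eye(3)[k]))
    return d
```

The zero level of `csg_model` is the immersed surface; the mesh-embedded variant of the paper applies the same max-combinations element by element.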
Thus the domain is obtained as shown in Fig. 18, and we assume that it is occupied by a hyperelastic solid. In a first analysis, linearised elasticity is assumed and the convergence is studied by using the fundamental solution of elasticity U(x, y) (see, for instance, [46]) as an imposed analytic solution with a source point y located outside of the domain. Therefore, on the bottom (x_3 = 0) the boundary displacements ū(x) = U(x, y) are prescribed and the remaining boundaries are subject to the Neumann condition t̄ = t(U)(x, y). For simplicity, the material parameters are chosen as λ = 28.85 and μ = 19.23. The right of Fig. 18 shows the convergence of the displacement solution and of the computed volume and surface area of the embedded domain. Quadratic convergence is observed for all considered quantities.

Next, a compressible Neo-Hookean material model [22,23] with large deformations is used, based on the strain energy density

ψ(F) = (μ/2)(tr C − 3) − μ ln J + (λ/2)(ln J)²,   (38)

with the right Cauchy–Green tensor C = FᵀF and J = det F. The load is applied in 4 steps, and within each step a Newton method is used to obtain the equilibrium state. The deformed geometry for these four load steps is shown in the images of Fig. 19 for a 40³ grid of linear hexahedron elements.
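A sketch of the energy (38) in the standard compressible Neo-Hookean form (the function below is our own illustration, using the material parameters quoted above):

```python
import numpy as np

def psi_neo_hooke(F, lam=28.85, mu=19.23):
    """Compressible Neo-Hookean strain energy density, cf. (38):
    psi = mu/2 (tr C - 3) - mu ln J + lam/2 (ln J)^2,
    with the right Cauchy-Green tensor C = F^T F and J = det F."""
    C = F.T @ F
    lnJ = np.log(np.linalg.det(F))
    return 0.5 * mu * (np.trace(C) - 3.0) - mu * lnJ + 0.5 * lam * lnJ**2
```

Two basic sanity checks: the energy vanishes in the undeformed state F = I and for pure rotations (frame indifference), and it is positive for a genuine deformation.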

Composite material
As a last example, the elastic deformation of a fibre-reinforced block of elastic material is considered. A block of dimension L × (2/5)L × L is reinforced by inclined fibres placed with a main-axis separation of L/3. The fibres are represented by cylinders with radius L/15. The three-dimensional setup is shown in the left of Fig. 20 and on the right a two-dimensional view of the problem is depicted. The bottom surface is held fixed and the top surface is constrained in normal direction. The left and right surfaces are subject to a constant traction field t̄ in normal direction. The discretisation is carried out by a fixed mesh of dimension 50 × 20 × 50, as indicated on the back faces of the three-dimensional view. For comparison, we monitor the average horizontal displacement throughout the composite body for a variety of fibre angles −35° ≤ α ≤ 35°. Both domains Ω_1 and Ω_2 obey the hyperelastic material law according to the energy (38).

In addition, the analysis of the average horizontal displacement U_1 as a function of the considered fibre angles α is shown for the Neo-Hooke material and for linearised elasticity. Although there are similarities between the large-deformation analysis and the linearised version, striking differences can be observed too. Most of all, the linear variant is completely symmetric with respect to the sign of α and has its largest value for α = 0°. In the large-deformation variant, on the other hand, the result is a larger deformation for negative fibre angles and a smaller one for positive angles. Overall, the body behaves less flexibly in the nonlinear analysis, but with a strong bias towards increased flexibility for negative fibre angles.

Conclusions
Immersed finite element methods, which do not rely on a body-fitted mesh, are a promising alternative to conventional FEM for many applications. Especially in the case of complex three-dimensional geometries, moving interfaces or design optimisation, such methods allow for more flexible geometry processing and remove the repeated interaction with mesh generation software. Here, we present an immersed FEM for the problem class of nonlinear elasticity, based on a weak incorporation of Dirichlet boundary conditions and interface conditions with Nitsche's method, an implicit geometry representation and accurate integration of the arising cut elements. We place emphasis on implementation details such as the robust computation of the signed distance function and quadrature by means of tessellation. A common pitfall of non-body-fitted FEM, the loss of numerical stability in situations with degenerate function supports, is analysed and we provide a stabilisation technique that is robust without affecting the convergence behaviour. Moreover, the choice of the parameters in the context of Nitsche's method is thoroughly discussed.
We demonstrate a way to incorporate sharp features such as edges and vertices in our method by means of successively embedding the geometry primitives into the analysis mesh in a similar way as in constructive solid geometry modelling. Based on this idea, geometry modelling is directly integrated in the finite element analysis and there is no need for a mesh generation tool. The presented applications emphasise the potential of this approach, where large deformation analyses are carried out based on a trivial Cartesian background mesh.
A present shortcoming of the introduced approach is the restriction to linear approximation orders. Although the field approximation used in this FEM can be of arbitrary order, a gain in convergence order would be impeded by the geometry representation based on linear facets. In principle, the use of more accurate signed distance functions and the subsequent adaptation on the quadrature level to account for embedded higher-order surface representations is feasible.
Finally, we note that the presented method is ideally suited for the incorporation of h-adaptivity. A combination of this immersed FEM with hierarchical refinement techniques as shown, for instance, in [47] would render a powerful analysis toolbox, which yields accurate numerical predictions based only on the input of geometry primitives.

Dirichlet boundary conditions
We develop an estimate for the parameter γ appearing in the linearised weighted residual equation (9). The material behaviour is assumed to be such that Da(u, v)[Δu] is an elliptic bilinear form. The aim is to show that A(u; w, w) > 0 for any w ≠ 0, where u is the current displacement solution. This gives

A(u; w, w) = Da(u, w)[w] − 2 ∫_{Γ_D} w · t(w) ds + γ ‖w‖²_{Γ_D}
           ≥ Da(u, w)[w] − 2 ‖t(w)‖_{Γ_D} ‖w‖_{Γ_D} + γ ‖w‖²_{Γ_D}
           ≥ Da(u, w)[w] − 2 C (Da(u, w)[w])^{1/2} ‖w‖_{Γ_D} + γ ‖w‖²_{Γ_D}
           = ((Da(u, w)[w])^{1/2} − C ‖w‖_{Γ_D})² + (γ − C²) ‖w‖²_{Γ_D} .   (40)

Here, ‖w‖_{Γ_D} is the L²-norm over the Dirichlet boundary Γ_D. The Cauchy–Schwarz inequality has been used from the first to the second line and, most importantly, the third line is based on the inverse inequality

‖t(w)‖²_{Γ_D} ≤ C² Da(u, w)[w] .   (41)

This type of estimate is also presented in [3] for the case of Poisson's equation. Knowledge of the constant C gives rise to the choice γ > C², which renders the last line in (40) positive. Inserting the finite element trial functions (20) into (41) for all coordinate directions e_a, 0 ≤ a < n_d, and all shape functions ϕ_I yields the discrete condition (42). Obviously, this condition is only non-trivial if the supports of ϕ_I and ϕ_J overlap each other and intersect with the Dirichlet boundary Γ_D. Therefore, we use the following abbreviations for the domain and boundary intersections

Ω_IJ = Ω ∩ (supp(ϕ_I) ∩ supp(ϕ_J)) and Γ_IJ = Γ_D ∩ (supp(ϕ_I) ∩ supp(ϕ_J)) .
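The structure of the eigenvalue condition obtained from (41) and (42) can be illustrated in one dimension (our own sketch, not the paper's computation):

```python
import numpy as np

def nitsche_constant_1d(n):
    """Discrete inverse-inequality constant C^2 for -u'' on (0,1) with a
    Nitsche boundary at x = 1 and n linear elements of width h = 1/n.

    With element-wise constant slopes d_i, the inequality |u_h'(1)|^2
    <= C^2 * int (u_h')^2 dx becomes d_n^2 <= C^2 * h * sum_i d_i^2, i.e.
    a generalised eigenvalue problem B d = lambda A d whose largest
    eigenvalue is C^2. It is maximised by d = e_n, giving C^2 = 1/h.
    """
    h = 1.0 / n
    A = h * np.eye(n)          # energy int (u_h')^2 in the slope variables
    B = np.zeros((n, n))
    B[-1, -1] = 1.0            # boundary term |u_h'(1)|^2
    return np.linalg.eigvalsh(np.linalg.solve(A, B)).max()

C2 = nitsche_constant_1d(16)
```

The computed C² equals 1/h and grows under refinement, which is precisely why the Nitsche parameter is chosen proportional to 1/h in the form γ = γ_0 α/h.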
The left-hand side of (42) can be re-written as integrals over the intersections Ω_IJ and Γ_IJ, see (44) and (45). Abbreviating the integrands in (44) and (45) by f_aIbJ and f̄_aIbJ, we obtain the inverse inequality in the form (46). Assuming continuity of the integrands, the mean value theorem of integration states that two points ξ ∈ Ω_IJ and η ∈ Γ_IJ exist such that the estimate becomes (47). Without assumptions on the shape functions ϕ_I and the material behaviour, we cannot further reduce the ratio of the evaluated integrands. Nevertheless, if α denotes a characteristic material parameter (for instance the bulk modulus), it is clear from (44) and (45) that this ratio scales like α²/α = α. Here, we focus on controlling the second part of the right-hand side of the estimate (47), the geometric ratio |Γ_IJ|/|Ω_IJ|. Using a standard finite element basis, there are constellations in which |Ω_IJ| → 0 whereas |Γ_IJ| does not decrease: see, for instance, the sliver test case in [16].
Employing the proposed stabilisation technique from the "Stabilisation" section, where critical shape functions are essentially joined with neighbouring ones that have a non-degenerate support, the above geometric ratio can be safely estimated as

|Γ_IJ| / |Ω_IJ| ≤ c/h

with a constant c, because the support sizes of the stabilised basis functions φ̃_I never fall below the stabilisation threshold ŝ, which is of order h^{n_d}, whereas |Γ_IJ| is at most of order h^{n_d − 1}. In conclusion, we can state that there exists a constant γ_0, independent of the material and the mesh size h, such that the choice γ = γ_0 α/h guarantees γ > C² and, in turn, A(u; w, w) > 0, see estimate (40).

Interface conditions
Next, we assess the choice of parameters in the linearised weighted residual equation (19) for the interface problems. To this end, the steps of (40) are repeated in an analogous manner with the function w = (w_1, w_2) and yield a corresponding estimate. Using the Peter–Paul inequality with some δ > 0 (that is, the estimate 2ab ≤ δa² + δ⁻¹b²), one obtains an intermediate bound. Let C_i denote the constant of (41) for the subdomain Ω_i, choose δ = ((1 − β)² C_2²)/(β² C_1²), and insert the inverse inequality (41) into the last expression in order to arrive at (53) with the constant D. Based on the discussion above, we know that the stabilisation technique given in the "Stabilisation" section keeps the values of C_i bounded; therefore D according to (53) is well-behaved, independent of the choice of 0 ≤ β ≤ 1. Nevertheless, in [43] a parameter choice for β is presented which keeps the value of D bounded even for a non-stabilised finite element basis. Their choice is, adapted to the notation used here, β = C_1⁻¹/(C_1⁻¹ + C_2⁻¹), and this value nicely controls the right-hand side of expression (53) for unbounded values of C_i (note that in geometric constellations for which C_1 is large, the counterpart C_2 is small, and vice versa). But it has to be remarked that in certain types of applications (e.g. fluid–structure interaction [48]) it is convenient to choose the parameter β freely without stability restrictions. Moreover, this special choice of β maintains A(u; w, w) > 0 for a finite value of γ, but it does not prevent unbounded values of the condition number of the system matrix. With the stabilisation technique presented here, the choice of β does not affect stability, and γ = γ_0 (α_1 + α_2)/h provides a safe choice for characteristic material parameters α_i.