 Research article
 Open access
Contact modeling from images using cut finite element solvers
Advanced Modeling and Simulation in Engineering Sciences volume 8, Article number: 13 (2021)
Abstract
This paper presents a robust digital pipeline from CT images to the simulation of contact between multiple bodies. The proposed strategy relies on a recently developed immersed finite element algorithm that is capable of simulating unilateral contact between solids without meshing (Claus and Kerfriden in Int J Numer Methods Eng 113(6):938–966, 2018). It was shown that such an approach reduces the difficulties associated with the digital flow of information from analytically defined geometries to mechanical simulations. We now propose to extend our approach to include geometries that are not defined mathematically but are instead obtained from images and encoded in 3D arrays of voxels. This paper introduces two novel elements. Firstly, we reformulate our contact algorithm into an extension of an augmented Lagrangian CutFEM algorithm. Secondly, we develop an efficient algorithm to convert the surface data generated by standard segmentation tools used in medical imaging into level set functions. These two elements give rise to a robust digital pipeline with minimum user intervention. We demonstrate the capabilities of our algorithm on a hip joint geometry with contact between the femur and the hip bone.
Introduction
In this article, we present a modern digital pipeline for the simulation of the mechanical interaction between solid bodies from CT images. The generation of finite element models from medical or engineering images is a very challenging task. The first step is to identify the set of voxels corresponding to the different solid bodies using image segmentation and/or registration. The regional information of the bodies is then typically transformed into a surface representation of each body, e.g. surface triangulations. From the simulation point of view, the challenge is that this surface mesh representation is of low quality and frequently contains badly shaped triangles which are not suitable for finite element analysis. The importance of high-quality meshes in biomedical simulations is highlighted in e.g. [14, 15]. To obtain accurate and stable solutions, the surface mesh needs to be remeshed, and this boundary operation is followed by interior volume meshing. Unfortunately, meshing is error-prone and often requires time-consuming manual interventions and corrections, which is highly cumbersome and requires valuable experts' time. To address this issue, several approaches are followed in the literature. A first class of mesh-mapping methods [33] attempts to automatise mesh generation by deforming reference meshes to fit specific geometries. Of course, such approaches still require prior meshing work and are limited to relatively small variations around the reference geometry. A second class of approaches aims to eliminate or significantly reduce the reliance on meshing. The most popular instance of these methods is the voxel- or image-based finite element method [20, 26, 31]. In voxel-based finite element methods, the voxels of the image are directly converted to a hexahedral finite element mesh. Greyscale values of the voxels are then mapped onto material values such as elasticity moduli.
These methods reduce the meshing burden considerably, because the computational grid does not need to be aligned with any object boundary. However, body boundaries are represented by “staircases”. This causes stress singularities in the finite element simulation, which negatively affect the quality of the results near boundaries. Unfortunately, boundaries are often of key importance to analysts. This is the case in the present contribution, which focusses on the simulation of contact between two elastic bodies. In addition to this drawback of voxel-based methods, one may also find that, depending on the resolution of the image, two bodies may be indistinguishable and merged together into one big block.
In this article, we introduce an alternative approach related to XFEM [4, 19, 28, 30] and CutFEM [6, 22].
We start from surface triangulations generated by a segmentation tool [1]. This surface mesh, which may be of poor quality, is then embedded in a regular and fixed background grid. We then generate signed distance functions for each body, using the regular background grid to represent them discretely. As usual with level-set-based methods, the surfaces of the bodies are represented by the zero contour line of these functions. The zero contour lines may intersect the background mesh in an arbitrary manner. This allows for a sharp and smooth (continuous and element-wise linear in this paper) surface representation, which is then used to simulate contact and compute contact forces between the elastic bodies.
The background mesh which is used to discretize the unknowns and the level set functions needs to be sufficiently fine in order for the piecewise linear level set function to capture the geometry accurately. For this purpose, we develop and analyse two refinement criteria for adaptive mesh refinement.
The contact problem between elastic bodies will be solved using the CutFEM-LaTIn algorithm recently introduced in [12]. In that paper, the versatility of the algorithm was demonstrated, and it was shown that it converges optimally. In the present article, we formally show that the CutFEM-LaTIn algorithm is an extension of the augmented Lagrangian CutFEM algorithm presented in [7, 8]. The CutFEM-LaTIn method uses ghost-penalty stabilization [5] to ensure well-conditioned system matrices. Alternatives to the ghost-penalty method may be employed, e.g. function extension [17, 23]. The algorithm is implemented in FEniCS [2].
The aforementioned elements, an efficient surface-triangulation-to-level-set geometrical algorithm, adaptive refinement criteria and a state-of-the-art contact simulation algorithm for contact over implicitly defined geometries, give rise to a stable and robust digital pipeline from CT images to finite element simulations.
This article is organised as follows. In “Frictionless unilateral contact between two deformable solids” section, we detail the CutFEM-LaTIn algorithm used to compute the contact forces between two elastic bodies. In “Digital pipeline from CT scans to CutFEM biomechanical simulations” section, we describe our algorithm to convert CT scans to cut finite element biomechanical simulations for multiple bones, including two adaptive refinement criteria. In “Unilateral contact between hip and femur bone” section, we show numerical results for contact between a femur and a hip bone.
Frictionless unilateral contact between two deformable solids
In this section, we will describe our contact algorithm for two elastic bodies which occupy the domains \(\Omega ^1\) and \(\Omega ^2\), respectively. The domains will be described implicitly using two level set functions. We begin our problem definition in the continuous setting in this subsection, before describing the discretization of the domains and of the unknowns in the next subsection.
We assume that each elastic body is only undergoing small deformations and that they interact through frictionless unilateral contact. A contact problem between more than two bodies is possible in our framework and a contact algorithm for a larger number of solids has been introduced in [12]. However, in this contribution, we restrict ourselves to the contact problem between two bodies (see Fig. 1). Let \(\Omega ^b\) denote our total domain which contains our two elastic bodies. Let \(\phi ^1:\Omega ^b \rightarrow {\mathbb {R}}\) and \(\phi ^2:\Omega ^b \rightarrow {\mathbb {R}}\) be the two level set functions that describe the geometry of body 1 and the geometry of body 2, respectively. We then define two domains \(\Omega ^1\) and \(\Omega ^2\) by
The contact interface \(\Gamma ^c\) between the two solids, which is assumed to be of nonzero measure, is defined as
Furthermore, let \(\partial \Omega ^i\) denote the boundary of \(\Omega ^i\), \(i=1,2\) and \(\partial \Omega := (\partial \Omega ^1 \cup \partial \Omega ^2) \backslash \Gamma ^c\) the boundary of domain \(\Omega ^1\) and \(\Omega ^2\) without \(\Gamma ^c\). We further partition the boundary \(\partial \Omega \) into a Neumann part \(\partial \Omega ^t\) and a Dirichlet part \(\partial \Omega ^u\) with \( \partial \Omega ^t \cap \partial \Omega ^u = \emptyset \).
The problem of unilateral contact can be formulated as an optimisation problem under inequality constraints. We look for the displacement field \({ \underline{u} }:= \{ { \underline{u} }^1, { \underline{u} }^2\}:\Omega ^1 \times \Omega ^2 \rightarrow {\mathbb {R}}^d \times {\mathbb {R}}^d\) that minimises the potential energy
under inequality constraints
and Dirichlet boundary conditions
Here, \(d=\{2,3\}\) is the spatial dimension, \({ \underline{n} }_{\Gamma }\) denotes the normal on \(\Gamma ^c\) pointing from \(\Omega ^1\) to \(\Omega ^2\), \(u_d\) is a known prescribed displacement and \({\llbracket { \underline{u} }\rrbracket } = { \underline{u} }^1 - { \underline{u} }^2\) denotes the jump of \({ \underline{u} }\) across the contact interface \(\Gamma ^c\). The inequality constraint (4) enforces zero interpenetration between the domains \(\Omega ^1\) and \(\Omega ^2\). In the previous expression, the bilinear form a is defined as
where \({ \underline{\underline{\sigma }} }({ \underline{u} }^i) = \lambda ^i \, \text {tr}({ \underline{\underline{\epsilon }} }({ \underline{u} }^i))\, \pmb {I} + 2 \, \mu ^i \,{ \underline{\underline{\epsilon }} }({ \underline{u} }^i) \) is the Cauchy stress tensor, \({ \underline{\underline{\epsilon }} }({ \underline{u} }^i) = \frac{1}{2} \left( \nabla { \underline{u} }^i + (\nabla { \underline{u} }^i)^T\right) \) is the strain tensor, \(\lambda ^i\) and \(\mu ^i\) are the two Lamé coefficients, i.e. \( \lambda ^i = \frac{E^i \nu ^i}{(1 + \nu ^i)(1 - 2 \nu ^i)} \), \(\mu ^i = \frac{E^i}{2(1 + \nu ^i)}\), with Young’s modulus \(E^i\) and Poisson’s ratio \(\nu ^i\), \(i=1,2\). The linear form l is defined as
In the following, we set the Poisson’s ratios to \(\nu ^1=\nu ^2=0.3\). Here, \(\underline{f}\) is a given body force and \(\underline{t}_d\) is a given boundary traction.
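The conversion from the engineering constants \((E^i, \nu ^i)\) to the Lamé coefficients above is a one-line computation; a minimal sketch in Python (the function name is ours):

```python
def lame_parameters(E, nu):
    """Convert Young's modulus E and Poisson's ratio nu to the Lame
    coefficients (lmbda, mu) appearing in the Cauchy stress tensor."""
    lmbda = E * nu / ((1.0 + nu) * (1.0 - 2.0 * nu))
    mu = E / (2.0 * (1.0 + nu))
    return lmbda, mu
```

For the value \(\nu = 0.3\) used in this paper, a unit Young's modulus gives \(\lambda \approx 0.577\) and \(\mu \approx 0.385\).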
The primal constrained optimisation problem (3)–(5) can be reformulated as an unconstrained primal/dual problem, which will be the basis for the development of this paper. In this setting, the solution to the contact problem is found by extremising the Lagrangian
with respect to the primal variable \({ \underline{u} }\), in the space of fields satisfying the Dirichlet boundary conditions but not the unilateral contact condition, which is now relaxed and enforced by
We reformulate the primal dual formulation in terms of the Karush–Kuhn–Tucker conditions, which consist of the linear variational form
for all \(\delta { \underline{u} }\) satisfying \(\delta { \underline{u} } = { \underline{0} }\) over \(\partial \Omega ^u\) and the nonlinear constraint on the contact interface \(\Gamma ^c\)
which can be reformulated as
for all \(\delta \lambda \in L^2(\Gamma ^c)\). Here, \(\gamma \) is a strictly positive scalar and \(\left[ x \right] _{+}\) denotes the positive part of a scalar quantity \(x \in {\mathbb {R}}\), defined as
For a proof of the equivalence of conditions (11) and (12), see for example [10, 11].
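The positive-part operator \(\left[ x \right] _{+}\) defined above is the only nonlinearity in condition (12); as a trivial illustration (the function name is ours):

```python
def positive_part(x):
    """The positive part [x]_+ = max(x, 0) of a scalar x.
    It acts as the projection that switches the contact condition
    between the 'open' (x <= 0) and 'active' (x > 0) branches."""
    return x if x > 0.0 else 0.0
```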
CutFEM primal/dual solver for the contact problem
In this section, we will describe the discretization of the contact problem, which uses level set descriptions of the geometries in a fixed and regular background grid. The boundary of the elastic bodies will be represented by the zero contour lines of level set functions, which will be allowed to intersect the background mesh cells in an arbitrary manner. Each domain \(\Omega ^i\) will be embedded in its own fictitious domain, resulting in overlapping fictitious domains along the contact interface. Each \({ \underline{u} }^i\) will obtain its own set of degrees of freedom in its fictitious domain, allowing for a multiple-valued displacement \({ \underline{u} }= ({ \underline{u} }^1,{ \underline{u} }^2)\) along the interface. This enables jumps in the displacement along the interface.
CutFEM/levelset approximation of the geometry
Let us introduce a triangulation \({\mathcal {T}}_h\) of the background domain \(\Omega ^b\). We assume a simple shape for \(\Omega ^b\), which is easy to mesh with a regular background grid, e.g. a bounding box which contains both elastic bodies. Furthermore, let us introduce the finite element space of continuous piecewise linear functions, i.e.
We now define a piecewise linear approximation \(\phi ^{1}_{h} \in {\mathcal {Q}}_h\) of level set \(\phi ^1\) and \(\phi ^{2}_h \in {\mathcal {Q}}_h\) of level set \(\phi ^2\), such that for any \(i \in \{1,2\}\), the nodal values of \(\phi ^{i}_h\) coincide with the values of \(\phi ^i\) at the locations of the nodes. We now define the approximate physical domains \(\Omega _h^1\) and \(\Omega _h^2\) as follows:
and the piecewise linear approximation of the interface
Overlapping domain decomposition
Now, we define the set of all elements of \({\mathcal {T}}_h\) that have a nonzero intersection with \(\Omega _h^i\), \(i=1,2\)
The domain corresponding to this set, denoted by \({\hat{\Omega }}_h^i := \bigcup _{K \in \hat{{\mathcal {T}}}_h^i } K \), is called the fictitious domain. Furthermore, let us denote all elements which are intersected by \(\Gamma ^c_h\) by
The domain corresponding to this set is denoted by \({\hat{\Gamma }}^{h} := \bigcup _{K \in \hat{{\mathcal {G}}}_h } K \). This definition will be used to extend interface quantities to a band of elements.
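In practice, these element sets can be assembled by inspecting the signs of the nodal level set values in each cell. A minimal sketch, assuming the nodal values are stored in a NumPy array `phi_nodes` and the tetrahedra in an integer connectivity array `cells` (both names are ours):

```python
import numpy as np

def classify_cells(phi_nodes, cells):
    """Classify each cell of a background mesh against a nodal level set.
    phi_nodes: (n_nodes,) level-set values at mesh nodes (negative inside).
    cells: (n_cells, 4) node indices of each tetrahedron.
    Returns 'inside' (all values < 0), 'outside' (all values > 0)
    or 'cut' (mixed signs, i.e. the zero contour crosses the cell)."""
    labels = []
    for cell in cells:
        v = phi_nodes[cell]
        if np.all(v < 0):
            labels.append("inside")
        elif np.all(v > 0):
            labels.append("outside")
        else:
            labels.append("cut")
    return labels
```

The cells labelled "inside" or "cut" together form \(\hat{{\mathcal {T}}}_h^i\), while the band \({\hat{\Gamma }}^{h}\) collects the cells labelled "cut".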
Of key importance to CutFEM methods is the careful tailoring of regularization terms to ensure that the system matrix remains wellconditioned. To this end, we define the set of ghostpenalty element edges for fictitious domain \({\hat{\Omega }}_h^i \), \(i \in \{ 1, 2\}\) as
For stabilization purposes still, we further define the set of intersected interface edges
Note our consistent use of the notation \(\hat{\cdot }\) to indicate extended quantities.
Fictitious domain and extended interface finite element spaces
We seek the displacement approximation \({\hat{{ \underline{u} }}}_h =\left( {\hat{{ \underline{u} }}}_h^1,{\hat{{ \underline{u} }}}_h^2 \right) \) of the twobody contact problem in the product space
Note that \({\hat{{ \underline{u} }}}_h\) is multivalued. This feature allows for the representation of embedded discontinuities in CutFEM. In all elements that are intersected by the contact interface \(\Gamma ^c_h\), \({\hat{{ \underline{u} }}}_h^1\) and \({\hat{{ \underline{u} }}}_h^2\) may take different values at a single point, which results in a nonzero displacement jump between \({\hat{{ \underline{u} }}}_h^2\) and \({\hat{{ \underline{u} }}}_h^1\).
Next, we define the space of extended interface fields of polynomial order 1 as
This is another important aspect of the CutFEM approach. Fields on manifolds such as embedded interfaces are represented as the trace of a finite element field defined in \({\mathbb {R}}^d\). In this paper, this feature will be used to represent the Lagrange multipliers required to enforce contact conditions in a primal/dual setting.
Primal/dual CutFEM solver
We now describe our P1/P1 LaTIn/CutFEM primal/dual algorithm, first introduced in [12]. We seek the displacement field in the linear CutFEM space \(\hat{{\mathcal {U}}}_h\) and the Lagrange multipliers, which correspond to the normal reaction forces from contact, in the space of linear interface fields \( \hat{{\mathcal {W}}}_h\). The presentation formalism that we have chosen for this paper allows us to highlight its relationship with the work presented in [7, 8] in the specific case of frictionless unilateral contact (our algorithm extends straightforwardly to more general interface conditions such as frictional contact or elastic damageable cohesive interfaces [25]).
Regularization of the bulk displacement
Let us define the extended and stabilized bilinear form over the product of fictitious domain fields as
where for \(i \in \{ 1,2\}\)
and the socalled ghostpenalty terms are defined by
Here, \(\llbracket x \cdot { \underline{n} }_F \rrbracket \) denotes the normal jump of the quantity x over the face F, defined as \(\llbracket x \cdot { \underline{n} }_F \rrbracket = \left. x \right| _{K} \cdot { \underline{n} }_F - \left. x \right| _{K'} \cdot { \underline{n} }_F\), where \({{ \underline{n} }}_F\) denotes a unit normal to the facet F with fixed but arbitrary orientation and \(\alpha >0\) is the ghost penalty parameter.
Regularization of the Lagrange multiplier field
The discretized conditions for optimality are the following: Find \(({\hat{u}}_h,{\hat{\lambda }}_h) \in \hat{{\mathcal {U}}}_h \times \hat{{\mathcal {W}}}_h\) satisfying the Dirichlet boundary conditions such that \(\forall ( \delta {\hat{u}}_h, \delta {\hat{\lambda }}_h ) \in \hat{{\mathcal {U}}}_h \times \hat{{\mathcal {W}}}_h\), where \(\delta {\hat{u}}_h\) satisfies the homogeneous boundary conditions (and there is no positivity requirement for the Lagrange multipliers),
where \(s_\heartsuit \) is a regularization term that penalizes the jump of Lagrange multipliers across interface element edges, as introduced in [8]:
Here, \(( u,v )_{\Gamma _h^c}= \int _{\Gamma _h^c} u v \, d \Gamma \), and \(\epsilon _\heartsuit \) is either 0 or 1. The toggle boolean \(\epsilon _\heartsuit \) is introduced for comparison with other references. When \(\epsilon _\heartsuit =1\), the previous problem is equivalent to the extremization of the saddle-type functional
with respect to both of its arguments. However, in this contribution, we choose \(\epsilon _\heartsuit =0\). This is to make the present formulation consistent with the developments proposed in [12], where the CutFEM-LaTIn algorithm was introduced in a completely different manner. We do not claim that this choice is better than the alternative.
Domain decomposition formulation
Whilst the previous “jump” formulation (26) couples the CutFEM subdomains, the LaTIn algorithm introduced in [12] implements a nonlinear domain decomposition approach. In this LaTIn-CutFEM algorithm, the bulk problem is first solved in each subdomain independently, and then interface fluxes and displacements are computed at the contact interface \(\Gamma _h^c\). We iterate between the bulk problems (linear stage) for \(i=1,2\) and the interface problem (local stage) until we achieve convergence. In the following, we briefly outline the main steps of the algorithm; a more detailed description can be found in [12].
To reformulate the “jump” formulation (26) into a problem over each fictitious domain \({\hat{\Omega }}^i_h\), we realize that (26) can be decomposed into
where \(b_h\) decomposes into \(b_h^1\) and \(b_h^2\) as
Let k denote the iteration steps starting with \(k=0\). We then iterate between the linear stage and the local stage detailed below.
The linear stage, in which we solve the bulk problems: find the displacement \({\hat{{ \underline{u} }}}_h \in \hat{{\mathcal {U}}}_h\) such that for all \(i=1,2\) and for all \(\delta {\hat{u}}_h \in \hat{{\mathcal {U}}}_h\)
subject to the descent direction for the interface fluxes \({\hat{\lambda }}^{i,k+1}_h \in \hat{{\mathcal {W}}}_h\) and the interface displacements \( {\hat{w}}^{i,k+1} \in \hat{{\mathcal {W}}}_h\), \(i=1,2\),
where the quantities from the previous halfiterate are known from the local stage and the interface displacements are given by
The local stage, in which, at each quadrature point along the interface \(\Gamma _h^c\), we find the interface fields \(\{ \hat{{\hat{w}}}^{i,k+\frac{1}{2}}_h , \hat{{\hat{\lambda }}}^{1,k+\frac{1}{2}}_h\} \), \(i=1,2\), that satisfy the contact condition
subject to search directions
for all \(i=1,2\) and to the equality/definition
We regularize the interface quantities \(\{ \hat{{\hat{w}}}^{i,k+\frac{1}{2}}_h , \hat{{\hat{\lambda }}}^{1,k+\frac{1}{2}}_h\}\) by solving the linear interface problem: find the filtered Lagrange multiplier \( {\hat{\lambda }}^{i,k+\frac{1}{2}}_h \in \hat{{\mathcal {W}}}_h\) such that for all \( \delta {\hat{\lambda }}_h \in \hat{{\mathcal {W}}}_h\)
for \(i=1\) or \(i=2\). The filtered interface jumps \({\hat{w}}^{i,k+\frac{1}{2}}\) for \(i=1\) and \(i=2\) can then be recovered locally using search direction (35), replacing the “double hat”—unfiltered—Lagrange multiplier by its regularized counterpart.
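The alternation above, between bulk solves and a pointwise projection enforcing the contact inequality, can be illustrated on a one-degree-of-freedom obstacle analogue. The following sketch is NOT the CutFEM-LaTIn solver itself; it is a toy Uzawa-type primal/dual iteration (all names and the toy model are ours) showing how a linear stage and a positive-part local stage interact:

```python
def uzawa_contact_1dof(k, f, gap, gamma=0.5, n_iter=100):
    """Toy primal/dual iteration for: minimise 0.5*k*u**2 - f*u
    subject to u <= gap (a single-DOF obstacle problem).
    Linear (bulk) stage: solve k*u = f - lam for fixed multiplier lam.
    Local (interface) stage: project lam <- [lam + gamma*(u - gap)]_+.
    Returns the converged displacement and contact reaction."""
    lam = 0.0
    for _ in range(n_iter):
        u = (f - lam) / k                        # linear stage
        lam = max(0.0, lam + gamma * (u - gap))  # local stage (positive part)
    return u, lam
```

With `k=1, f=2, gap=1`, the unconstrained minimiser `u=2` violates the obstacle, so the iteration converges to the contact solution `u=1` with reaction `lam=1`.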
Digital pipeline from CT scans to CutFEM biomechanical simulations
In the following section, we will describe our pipeline from CT images to CutFEM simulations. We will explain the pipeline on scans of the hip joint. Here, we utilise well-known classical algorithms, including the decomposition of cells into interior, exterior and intersected cells, and the computation of the signed distance field with a combination of a geometrical computation in the near field and fast marching in the far field; we extend these algorithms from the classical case of a single signed distance function to multiple signed distance functions in the context of contact computations.
Image segmentation and surface triangulation of bone geometries
In the first step, we reconstruct a patient-specific hip joint geometry based on computed tomography (CT) images (see Fig. 2). The images are obtained from the cancer imaging archive (TCIA) [27] and transferred to the 3D Slicer software for the hip joint segmentation [1, 18].
We use multiple landmarks in a few slices to define the femur, the pelvis, and the background regions. Using these landmarks, all the image pixels are labelled and the corresponding 3D volumes are calculated (Fig. 2). Further, we refine the bone models to eliminate rough surfaces, holes, and irrelevant connected tissues. We extract the surface of each bone as a triangulated surface mesh and export them as STL files. These STL files are used as the basis for the signed distance function calculation described in the following.
Proposed surface triangulation to levelset algorithm
The surface triangulations, read from the STL files, are first embedded in a regular background grid, and then signed distance functions are computed on this grid. The following steps describe this surface-triangulation-to-level-set algorithm.
Step 1: Determine Bounding Box Mesh In the first step, we determine a bounding box domain, denoted by \(\Omega ^b\), which contains both the surface triangulation of the hip bone, denoted by \({\mathcal {S}}_h^1\), and the surface triangulation of the femur bone, denoted by \({\mathcal {S}}_h^2\). This bounding box domain, \(\Omega ^b\), is defined by the minimum and maximum coordinates, \(\pmb {x}_{min}= (x_{min},y_{min},z_{min})\) and \(\pmb {x}_{max}=(x_{max},y_{max},z_{max})\), of all vertex coordinates in both surface triangulations \({\mathcal {S}}_h^1\) and \({\mathcal {S}}_h^2\). The resulting domain is given by \(\Omega ^b=[x_{min}, x_{max}] \times [y_{min}, y_{max}] \times [z_{min}, z_{max}]\). We then generate a uniform and regular tetrahedral mesh, denoted by \({\mathcal {T}}_h^b\), on the background box domain \(\Omega ^b\).
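Step 1 amounts to a componentwise minimum/maximum over all vertices of both surface meshes; a minimal sketch, assuming the STL vertices have been loaded into NumPy arrays (the function name and optional padding are ours):

```python
import numpy as np

def bounding_box(vertices_1, vertices_2, pad=0.0):
    """Axis-aligned bounding box enclosing two surface triangulations.
    vertices_i: (n_i, 3) arrays of vertex coordinates from the STL files.
    Returns (x_min, x_max) as 3-vectors, optionally padded outward."""
    pts = np.vstack([vertices_1, vertices_2])
    return pts.min(axis=0) - pad, pts.max(axis=0) + pad
```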
Step 2: Collision detection between the surface triangulations and the background mesh In the second step, we detect all collisions between the surface triangles and the background mesh tetrahedra. A collision between a surface triangle and a background mesh tetrahedron (cell) means that the surface triangle and the tetrahedron have a nonempty intersection. An efficient way to detect the collision between each surface triangle and the background mesh cells is to use Axis Aligned Bounding Box (AABB) trees. In this method, instead of testing each background mesh cell for an intersection with each surface triangle \(F_j \in {\mathcal {S}}_h^i\), we detect intersections between two hierarchical structures of bounding boxes (AABB trees, for more details see [16, 24, 29]). This collision detection between an AABB tree for the surface mesh, \({\mathcal {S}}_h^i\), \(i=1,2\), and the AABB tree for the background mesh, \({\mathcal {T}}_h^b\), yields a set of tetrahedra that have a bounding box which collides with the bounding box around the surface triangle \(F_j \in {\mathcal {S}}_h^i\). We then test, among this set of tetrahedra with colliding bounding boxes, whether the surface triangle indeed intersects the tetrahedra using geometric predicates (triangle–tetrahedron intersection [16]).
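The broad-phase test at the heart of an AABB tree reduces to interval overlap along each coordinate axis. A sketch of this box-box predicate (not the tree traversal itself; the function name is ours):

```python
def aabb_overlap(min_a, max_a, min_b, max_b):
    """Broad-phase test: True if two axis-aligned boxes intersect.
    Two boxes overlap iff their projections overlap on every axis."""
    return all(min_a[d] <= max_b[d] and min_b[d] <= max_a[d]
               for d in range(3))
```

Only pairs passing this cheap test are handed to the exact triangle–tetrahedron predicate.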
We express the results of this collision detection in a map between the index of each surface triangle, j for surface triangle \(F_j \in {\mathcal {S}}_h^i\), \(i=1,2\), and the indices of the background mesh tetrahedra, \(k_m\), it intersects (collides) with. Formally, these maps, one for each surface triangulation, are defined as
where \(i=1,2\), j denotes the index of the surface triangle \(F_j \in {\mathcal {S}}_h^i\) and runs from 1 to the number of surface triangles in \({\mathcal {S}}_h^i\), and \(k_m\), \(m = 1, \dots , N_j\), denotes the index of a background mesh tetrahedron that the surface triangle j intersects. Note that one surface triangle in general intersects several tetrahedra in the background mesh. The number of tetrahedra intersected by the surface triangle with index j is denoted by \(N_j\).
In addition to these mappings, we also construct the inverse map from the index of the background mesh tetrahedra to the index of the surface triangles that intersect the tetrahedra defined as
Here, k denotes the index of the background mesh tetrahedron, and \(j_m\), \(m=1,\dots , N_k\) denotes the index of the surface mesh triangle \(F_{j_m} \in {\mathcal {S}}_h^i\) that has a nonempty intersection with the background tetrahedron k and \(N_k\) is the number of surface triangles which intersect the background mesh tetrahedron with index k. These two maps are crucial for the next steps.
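Given the forward map from triangle index to intersected tetrahedra, the inverse map can be accumulated in a single pass; a minimal sketch (the function name is ours):

```python
from collections import defaultdict

def invert_collision_map(tri_to_tets):
    """Invert a map {triangle index: [tetrahedron indices]} into the
    map {tetrahedron index: [triangle indices]} used in steps 3 and 5."""
    tet_to_tris = defaultdict(list)
    for j, tets in tri_to_tets.items():
        for k in tets:
            tet_to_tris[k].append(j)
    return dict(tet_to_tris)
```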
Step 3: Decompose cells in the background mesh into interior cells, intersected cells and exterior cells We give each background mesh tetrahedron a set of two marker values, \(M = (M_1,M_2)\), \(M_1 \in \{ 0,1,2\}\), \(M_2\in \{ 0,1,2\}\), depending on whether the tetrahedron is inside, intersected by, or outside the femur or hip bone surface triangulations \({\mathcal {S}}_h^i\), \(i=1,2\). We start with all cells marked as \(M=(0,0)\) and then mark the tetrahedra intersected by \({\mathcal {S}}_h^i\) using the inverse mapping \(\chi ^{i,1}\). From the mapping, we determine the indices of all background mesh tetrahedra that have a nonzero intersection with the surface triangulation \({\mathcal {S}}_h^i\), \(i=1,2\), and mark these cells with 1 (\(M_1=1\) if the tetrahedron intersects cells in \({\mathcal {S}}_h^1\) and \(M_2=1\) if the tetrahedron intersects cells in \({\mathcal {S}}_h^2\)). The last step in the mesh marking is to determine all cells outside of the femur bone surface triangulation (\(M_1=2\)) and outside of the hip bone surface triangulation (\(M_2=2\)). To achieve this, we assume that the background mesh cells are clearly separated by a band of intersected background mesh elements into mesh cells inside the femur or hip bone and mesh cells outside. Starting with a tetrahedron in a corner of the background mesh, for which we are sure that the cell is outside of both the hip and femur bone geometries, we mark this corner cell with \(M = (2,2)\). We then find all cells connected to this starting tetrahedron and check whether any of the connected tetrahedra is marked with \(M_i=1\). If the neighbouring cell is marked with \(M_i=1\), we leave the mark at \(M_i=1\); if it is marked with \(M_i=0\), we set \(M_i=2\) and find all cells connected to the new cell we just marked with \(M_i=2\). This way, we find all cells that are connected to the outside corner cell without crossing the band of elements which are marked as intersected.
For more details on a mesh marking method for connected domains see [13]. Note that the presented technique is similar to the classical coloring technique employed in Delaunay mesh generation [9]. Alternatively to exploiting connectivities of inside and outside domains, ray casting techniques can be used in which the number of intersections of a ray originating in a background mesh cell with the surface mesh can determine if a cell is inside or outside (see e.g. [3]). The decomposition into intersected, interior and exterior cells will be crucial in the next steps.
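The exterior marking described above is a flood fill over the cell connectivity graph, blocked by the band of intersected cells. A sketch for a single body (all names are ours; the paper applies this once per marker component \(M_i\)):

```python
from collections import deque

def mark_outside_cells(n_cells, neighbours, intersected, start):
    """Flood-fill from a cell known to be outside the body.
    neighbours: list of adjacent-cell index lists (facet connectivity).
    intersected: boolean list, True if the cell is cut by the surface.
    Returns a marker list: 1 = intersected, 2 = outside, 0 = inside,
    matching the convention M_i in {0, 1, 2} of step 3."""
    marker = [1 if intersected[k] else 0 for k in range(n_cells)]
    marker[start] = 2
    queue = deque([start])
    while queue:
        k = queue.popleft()
        for nb in neighbours[k]:
            if marker[nb] == 0:      # unvisited, not cut: also outside
                marker[nb] = 2
                queue.append(nb)
            # marker[nb] == 1: the intersected band blocks the fill
    return marker
```

On a chain of cells with one cut cell in the middle, the fill stops at the band and the cells beyond it keep the interior marker 0.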
Step 4: Extract fictitious domain In this step, we disregard the cells outside the femur and hip bone geometries. The cells outside of both the femur and the hip bone have no physical meaning and are not of interest in our simulations. We disregard the outside cells by constructing a new background mesh, \({\mathcal {T}}_h^* \subset {\mathcal {T}}_h^b\), with all cells from the original background mesh for which \(M_1 \in \{0,1\}\) or \(M_2 \in \{0,1\}\), i.e. all cells except those marked \(M=(2,2)\). In extracting the cells, we keep track of the mapping of the cell indices of this new “submesh” to the original background mesh to update the mappings \(\chi ^{i,1}\) (to map from the surface triangle index to the new indices of the fictitious domain background mesh, \({\mathcal {T}}_h^*\)). We also keep the mesh cell marker. The new background mesh, \({\mathcal {T}}_h^*\), forms a “cube”-like mesh which fully contains the hip joint geometry (as shown in Fig. 3).
Step 5: Compute the signed distance functions to the femur and to the hip bone surface meshes In this step, we determine two signed distance functions: one to the hip surface mesh, denoted by \(\phi ^1_h\), and one to the femur bone surface mesh, denoted by \(\phi ^2_h\). We compute the distance to surface mesh \({\mathcal {S}}_h^i\) in the following way:

1.
Initialization phase. In the first step, we use a geometry-based approach to compute the unsigned distance of all nodes of the intersected tetrahedra to the surface mesh, as described in the following. Using the mapping \(\chi ^{i,1}\), we iterate over all cells that are intersected by the surface mesh \({\mathcal {S}}_h^i\). For each intersected cell, we iterate over its vertex nodes with coordinates \(\mathbf {v_p^j}\), \(j=1,2,3,4\). We compute the distance from each vertex node to all surface triangles which the current element intersects (given by the map \(\chi ^{i,1}\)). This is done by calculating the unsigned distance of a point (\(\mathbf {v_p^j}\)) to a triangle [16]. Among all distances determined for a vertex node, we keep the shortest distance and save the value in the degree of freedom of the level set finite element function corresponding to the vertex node, given by a vertex-to-dof mapping (for a piecewise linear level set function). Note here that vertex nodes are often visited several times (when they belong to several intersected tetrahedra), and we always keep the shortest of all computed unsigned distance values.

2.
After initialising all values of the level set at the nodes of all intersected tetrahedra (previous step), we use a fast marching method [32] to compute the approximate unsigned distance of all remaining nodes in the background mesh. In a first step, we iterate over all nodes with known distance values and determine all nodes connected to these nodes via an edge (neighbouring nodes). We save these connected (neighbouring) nodes in a set called the active set. We then compute the distance of the nodes in the active set to the known values as follows. For each node in the active set, we iterate over all tetrahedra that share this node. For each tetrahedron, we compute the distance of the active node to the known values in the tetrahedron (if any). There are four possible cases: first, the tetrahedron contains three known values [point (active node)–triangle distance]; second, the tetrahedron contains two known values [point (active node)–edge (between known nodes) distance]; third, the tetrahedron contains one known value [point (active node)–point (known node) distance]; or fourth, the tetrahedron contains no known values (no distance computation). Once all nodes in the active set have their distance values (smallest of all computed distances), we determine the next set of nodes connected to all known nodes. We repeat this process until the unsigned distance values at all nodes have been computed. For more details on the employed fast marching method see [21].

3. In a last step, we convert the unsigned distance function computed above into a signed distance function using the cell marker computed in step 3, where all tetrahedra enclosed by the surface mesh were marked with 0, all cells outside with 2 and all intersected cells with 1. We want negative level set values inside our bone geometries and positive values outside, such that the zero contour of the level set function is a linear approximation of the surface mesh (see Fig. 4) and the normalised gradient of the level set is a normal pointing from the inside of the bone to the outside. To achieve this, it suffices to assign a negative sign to the level set values at all nodes of cells marked with 0.
Following steps 5.1 to 5.3, for each \({\mathcal {S}}_h^i\), \(i=1,2\), we obtain the piecewise linear signed distance functions \(\phi ^1_h\) and \(\phi ^2_h\), which provide a linear approximation of the femur and hip bone geometries, respectively. In the next section, we will use these signed distance functions to compute stress distributions in the bones.
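The three steps above can be sketched in NumPy as follows. This is a minimal illustration under assumed data structures, not the paper's implementation: `chi` maps each intersected background cell to the indices of the surface triangles it collides with, `cells`/`verts`/`tris`/`marker` are hypothetical array layouts, the point–triangle distance follows the closest-point construction of [16], and the fast marching step is simplified to edge-based point–point updates only (the full method also performs the point–edge and point–triangle updates described in step 5.2).

```python
import numpy as np
from collections import defaultdict

def point_triangle_distance(p, a, b, c):
    """Unsigned distance from point p to triangle (a, b, c), via the
    closest-point-on-triangle construction of Ericson [16]."""
    ab, ac, ap = b - a, c - a, p - a
    d1, d2 = ab @ ap, ac @ ap
    if d1 <= 0 and d2 <= 0:
        return np.linalg.norm(p - a)                 # closest to vertex a
    bp = p - b
    d3, d4 = ab @ bp, ac @ bp
    if d3 >= 0 and d4 <= d3:
        return np.linalg.norm(p - b)                 # closest to vertex b
    vc = d1 * d4 - d3 * d2
    if vc <= 0 and d1 >= 0 and d3 <= 0:
        return np.linalg.norm(p - (a + d1 / (d1 - d3) * ab))   # edge ab
    cp = p - c
    d5, d6 = ab @ cp, ac @ cp
    if d6 >= 0 and d5 <= d6:
        return np.linalg.norm(p - c)                 # closest to vertex c
    vb = d5 * d2 - d1 * d6
    if vb <= 0 and d2 >= 0 and d6 <= 0:
        return np.linalg.norm(p - (a + d2 / (d2 - d6) * ac))   # edge ac
    va = d3 * d6 - d5 * d4
    if va <= 0 and d4 - d3 >= 0 and d5 - d6 >= 0:
        w = (d4 - d3) / (d4 - d3 + d5 - d6)
        return np.linalg.norm(p - (b + w * (c - b)))           # edge bc
    v, w = vb / (va + vb + vc), vc / (va + vb + vc)
    return np.linalg.norm(p - (a + v * ab + w * ac))           # interior

def initialise_distance(verts, cells, chi, surf_verts, tris):
    """Step 5.1: shortest unsigned distance at every node of every
    intersected tetrahedron; np.inf marks 'not yet known'."""
    phi = np.full(len(verts), np.inf)
    for cell, tri_ids in chi.items():
        for v in cells[cell]:                        # the four vertex nodes
            for t in tri_ids:
                a, b, c = surf_verts[tris[t]]
                phi[v] = min(phi[v], point_triangle_distance(verts[v], a, b, c))
    return phi

def fast_march(phi, edges):
    """Step 5.2 (simplified): propagate distances layer by layer through
    the background mesh using point-point updates along edges only."""
    nbrs = defaultdict(list)
    for i, j, w in edges:                            # (node, node, length)
        nbrs[i].append((j, w))
        nbrs[j].append((i, w))
    known = {i for i, v in enumerate(phi) if np.isfinite(v)}
    while True:
        # next active set: unknown nodes sharing an edge with a known node
        active = {j for i in known for j, _ in nbrs[i] if j not in known}
        if not active:
            return phi
        for j in active:     # keep the smallest of all candidate distances
            phi[j] = min(phi[i] + w for i, w in nbrs[j] if i in known)
        known |= active

def apply_sign(phi, cells, marker):
    """Step 5.3: make the distance negative inside the bone. marker: 0 =
    enclosed cell, 1 = intersected, 2 = outside; only nodes of cells
    marked 0 flip sign."""
    psi = phi.copy()
    for c, m in enumerate(marker):
        if m == 0:
            psi[cells[c]] = -np.abs(psi[cells[c]])
    return psi
```

Note that `apply_sign` makes the hole mechanism discussed below explicit: a thin region with no fully enclosed cell contains no node marked 0, so the sign never flips there and the zero contour cannot pass through.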
Adaptation of the background mesh to guarantee sufficient geometric resolution
As described in the previous section, we compute a piecewise linear signed distance function to represent the surface triangulation geometry. This piecewise linear representation is close to the surface geometry in planar areas. However, in areas of high curvature or with fine features, we need to ensure that the background mesh size is fine enough to capture these features with our piecewise linear approximation. Here, the finest mesh size is dictated by the thinnest parts of the geometry. In the piecewise linear approximation, the thinnest parts need to be resolved by at least two cells.
In this section, we will first make some heuristic observations about the quality of the level set approximation for the hip and femur bone geometry for a refinement of all background mesh cells which are colliding with the surface triangulations of the two bones. Secondly, we will use these heuristic observations to suggest two refinement criteria to automatically choose only some of the colliding elements to be refined and to generate a level set approximation that captures the geometry with sufficient accuracy.
Firstly, Fig. 5a, b show the effect of choosing a background mesh which is too coarse to represent the fine features in the hip bone geometry. The piecewise linear level set approximation cannot capture the fine bone structures in the upper hip bone and in the acetabulum, and instead these areas are represented as holes in the geometry. Refining the elements that collide with the surface triangulations (as identified by the markers in the algorithm introduced in the previous section) until all features are sufficiently represented solves this problem. Figure 5 shows the zero contour line of the level set function for two such background mesh refinement steps (a–c) together with their background meshes (e–g) and the original surface triangulation (d, h). We can see in Fig. 5 that the level set approximation leaves large holes in the hip bone for the initial background mesh. After the first refinement step, we still observe some small holes in the thin part of the upper hip bone, which are finally completely closed in the second refinement step.
The appearance of these holes in the hip bone geometry is due to sections of the hip bone geometry with no connectivity to interior cells, as displayed in Fig. 6. Figure 6a–c shows a slice (shown in Fig. 6g) through the part of the hip bone with the holes. Elements colliding with the surface triangulation of the hip bone or the femur bone are displayed in green, elements outside the hip bone are shown in red and elements fully inside the hip bone are shown in blue. The zero level set contour line is shown in yellow and the surface triangulation is shown by spheres at the nodes of the surface triangulation. We observe that in the thin part of the hip bone, elements are detected as colliding but no element is fully inside the hip bone geometry (see Fig. 6d–f). The computed distance function to the surface triangulation is only assigned a negative sign at the nodes of elements that are fully inside. As the thin section does not contain any fully interior elements, the level set function does not change sign and hence no zero contour intersects these elements. This yields the holes. By refining all background elements which are colliding with the surface triangulations, we can close these holes and improve the geometrical approximation. We refine until the zero level set contour is close to the original surface geometry. Here, two refinements were found to be sufficient.
As the femur bone does not contain as many small features, it is much more easily approximated by a piecewise linear level set approximation on a coarse background mesh (see Fig. 7). Figure 7 displays the linear approximation of the zero contour line of the level set function in yellow and the nodes of the surface triangulation as spheres in a slice through the femur bone. We observe that the zero contour line approximates the femur shape in the slice very well. Only for the coarsest mesh can we observe a clear difference between the femur bone and the surface triangulation in areas of high curvature.
Hence, we heuristically observed two major challenges in the resolution of the surface triangulation: the appearance of holes and the approximation of areas with high curvature. We develop two refinement criteria to address these two challenges: one to identify cells in which the level set approximation does not change its sign even though the surface triangulation collides with the element (holes), and one to identify cells in which the zero contour line of the level set is far away from the surface triangulation (high curvature).
The first refinement criterion is to mark for refinement all cells which collide with the surface triangulation but in which the computed level set function does not change sign. We refer to this refinement criterion as the “adaptive sign criterion”.
For the second refinement criterion, we determine, for each background mesh cell that collides with at least one surface triangle, the maximum absolute level set value over all nodes of all surface triangles which collide with the background cell. For example, let us assume background mesh cell j collides with surface triangles k, l and m. We then evaluate the level set function in each node of the surface triangles k, l and m, which gives us the distance of each of these nodes to the zero level set contour. We take the maximum absolute level set/distance value over all these node values and store it for cell j, denoted by \(d_{max}^j\). We repeat this process for all cells that collide with surface triangles. We then determine the global maximum over all these cells, \(d_{max}^{global} = \max _{j} d_{max}^{j}\), where j runs over the indices of all cells that collide with the surface triangulation. We mark for refinement all cells whose maximum distance value is larger than a threshold \(\epsilon \) times the global maximum, with \(0<\epsilon <1\), i.e. we mark each cell j which collides with the surface triangulation if \(d_{max}^{j} > \epsilon \, d_{max}^{global}\). We refer to this second refinement criterion as the “adaptive greedy criterion”.
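The two criteria can be sketched compactly. In this hypothetical data layout (an assumption for illustration, not the paper's code), `chi` maps each colliding background cell to the indices of the surface triangles it collides with, `phi_surf` holds the level set values evaluated at the surface-triangulation nodes, and `phi` holds the nodal level set values of the background mesh:

```python
def greedy_marking(chi, tris, phi_surf, eps=0.5):
    """Adaptive greedy criterion: mark cell j if d_max^j > eps * d_max^global,
    where d_max^j is the largest |phi| over the nodes of all surface
    triangles colliding with cell j."""
    d_max = {j: max(abs(phi_surf[n]) for t in tri_ids for n in tris[t])
             for j, tri_ids in chi.items()}
    d_glob = max(d_max.values())          # global maximum over all cells
    return {j for j, d in d_max.items() if d > eps * d_glob}

def sign_marking(chi, cells, phi):
    """Adaptive sign criterion: mark colliding cells whose nodal level set
    values do not change sign (these produce holes in the zero contour)."""
    return {j for j in chi
            if min(phi[v] for v in cells[j]) >= 0
            or max(phi[v] for v in cells[j]) <= 0}
```

Both functions return sets of cell indices to refine; combining the criteria amounts to taking the union of the two sets.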
To evaluate the quality of the level set approximation with respect to the surface triangulation, we calculate the average distance value over all nodes of the surface triangulation, i.e. \(d_{avg} = \frac{1}{N_k}\sum _{k=1}^{N_k} |\phi _h(\pmb {x}_k)|\), where \(\pmb {x}_k\) denotes the coordinates of node k of the surface triangulation and \(N_k\) is the total number of nodes in the surface triangulation.
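Computing \(d_{avg}\) requires evaluating the piecewise linear level set at each surface node; a minimal sketch using barycentric coordinates, where the host cell of each surface node is assumed known (e.g. from the collision map):

```python
import numpy as np

def eval_p1(x, tet_verts, tet_phi):
    """Evaluate a P1 function with nodal values tet_phi at point x inside
    the tetrahedron tet_verts (4x3 array) via barycentric coordinates."""
    T = np.column_stack([v - tet_verts[3] for v in tet_verts[:3]])
    lam = np.linalg.solve(T, x - tet_verts[3])     # first three coordinates
    lam = np.append(lam, 1.0 - lam.sum())          # fourth sums to one
    return lam @ tet_phi

def average_distance(surf_nodes, host_cells, verts, cells, phi):
    """d_avg = (1/N_k) * sum_k |phi_h(x_k)| over all surface nodes x_k;
    host_cells[k] is the background cell containing x_k (assumed known)."""
    vals = [abs(eval_p1(x, verts[cells[c]], phi[cells[c]]))
            for x, c in zip(surf_nodes, host_cells)]
    return sum(vals) / len(vals)
```

A perfect fit of the zero contour to the surface triangulation gives \(d_{avg}=0\); under refinement the value should decrease, which is what Fig. 8 tracks.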
Figure 8 shows the average distance of the level set function to the surface triangulation versus the number of cells for the different adaptive refinement criteria. We compare refinement of all cells colliding with the interface (dashed, red) with the sign refinement criterion (dashed, cyan), the greedy refinement criterion with a parameter of \(\epsilon =0.5\), and the combination of both sign and greedy criteria. Figure 8a shows the average distance of the level set function to the surface triangulation for the femur bone and Fig. 8b shows the average distance for the hip bone. The refinement criteria have very different effects for each bone. For the smooth femur bone, the sign criterion, which refines in the area of holes or very high curvature, does not refine as efficiently as greedy or uniform refinement with respect to the average distance. For the hip bone, on the other hand, the sign criterion reduces the average distance between the level set function and the surface triangulation most efficiently. This matches our earlier heuristic observations: the hip bone has many fine features and areas of high curvature, which are efficiently resolved by the sign criterion. The greedy criterion performs best for the femur bone and yields lower average distances than uniform refinement for the same number of elements. For the hip bone, the greedy criterion performs slightly less well than the sign criterion up to about 70,000 cells, beyond which it yields better refinement. The combination of both sign and greedy criteria yields the lowest average distance to the surface triangulation for the hip bone geometry.
Unilateral contact between hip and femur bone
In this section, we use our LaTIn/CutFEM algorithm described earlier in this paper to compute the stress distribution in the hip and the femur bone in unilateral contact. All of our software developments are based on FEniCS [2]. In [12], we showed that our LaTIn/CutFEM algorithm is stable and converges optimally for a range of carefully selected benchmark problems. This contribution focuses on showing that our contact algorithm performs well for geometries obtained from images. We use the level set geometry descriptions of the femur and the hip bone obtained from the algorithm described in the previous section and compute a unilateral contact problem on the bones.
We bring the femur and the hip bone into contact by moving the femur bone geometry slightly into the acetabulum to create a clear contact interface between the bones. The resulting contact interface is displayed in Fig. 9. We need to predefine the contact interface here because our algorithm currently does not include contact detection and is only valid for small deformations. The extension of the algorithm to large deformations and contact detection is out of the scope of this contribution and will be addressed in a future publication.
Figure 10 shows the fictitious domain mesh used for the computation, which consists of 460,985 cells, yielding 288,426 degrees of freedom for the P1 displacement. As boundary conditions, we apply homogeneous Dirichlet boundary conditions for the displacement on the bottom of the femur bone, and we apply a displacement of \({\mathbf {u}}=(0,0,1)^T\) on the boundary facets on top of the hip bone marked in red in Fig. 10. We set the Young’s moduli for the hip and femur bone to \(E^1=E^2=1\), and the penalty parameters to \(\alpha =\beta =0.1\), \(\gamma =1\).
Figure 11 shows the stress component \(\sigma _{zz}\) for the hip and femur bone. We observe that under our artificial boundary conditions, the thin part of the hip bone is under compression and the rest of the hip bone is under tension. The femur bone is mainly under tension except for a small area in the femoral neck and in the contact region. Looking at a slice through the hip joint, as shown in Fig. 12, we see that contact forces transmit the compression force to the top of the femoral head. Figure 12b shows the deformation from the slice in the reference configuration to the slice in the deformed configuration. We observe that the femur bone deforms very little while the hip bone shows a much larger deformation. The magnitude of the deformation is shown in Fig. 13; as seen in the slice, the femur bone deforms very little while the hip bone deforms significantly away from the area of contact between the femur and hip bone. The femur bone stabilises the hip bone in the contact region but the hip bone is free to move away from the contact interface. Figure 14 shows the impact of the penalty parameter \(\beta \), which is used to stabilise the Lagrange multiplier, on the contact pressure. For \(\beta =0\), the contact pressure oscillates strongly; in [12], we showed that these oscillations grow unbounded with increasing number of LaTIn iterations. For \(\beta >0\), we observe a smooth contact pressure. For very large \(\beta \) (\(\beta =10\)), gradients of the contact pressure are over-penalised, which yields an overly stiff interface field that moves like a block.
Conclusions
In this article, we have first presented a reformulation of our LaTIn/CutFEM algorithm in terms of an augmented Lagrangian approach for two elastic bodies. We have then detailed our robust algorithm to compute level set representations of multiple bodies from segmented images on finite element meshes. We have discussed in detail how to adapt the finite element mesh to guarantee a good approximation of the surface triangulation by the level set function, and have developed and analysed two criteria for adaptive mesh refinement. Finally, we have concluded our article with numerical results of a unilateral contact computation between a femur and a hip bone.
Our contact algorithm is currently limited to small deformations. In a future contribution, we plan to extend our approach to large deformations.
Data Availability Statement
The CT scan of the hip joint used in this article was obtained from The Cancer Imaging Archive (TCIA) [27] and can be downloaded at https://www.cancerimagingarchive.net/.
References
3D Slicer. https://www.slicer.org/, 2019.
Alnæs Martin, Blechta Jan, Hake Johan, Johansson August, Kehlet Benjamin, Logg Anders, Richardson Chris, Ring Johannes, Rognes Marie E, Wells Garth N. The FEniCS project version 1.5. Archive of Numerical Software. 2015;3(100):1.
Baumgärtner D, Wolf J, Rossi R, Dadvand P, Wüchner R. A robust algorithm for implicit description of immersed geometries within a background mesh. Advanced Modeling and Simulation in Engineering Sciences. 2018;5(1):21.
Bui HP, Tomar S, Bordas SPA. Corotational cut finite element method for real-time surgical simulation: Application to needle insertion simulation. Computer Methods in Applied Mechanics and Engineering. 2019;345:183–211.
Burman E. Ghost penalty. Comptes Rendus Mathematique. 2010;348(21–22):1217–20.
Burman E, Claus S, Hansbo P, Larson MG, Massing A. CutFEM: Discretizing geometry and partial differential equations. International Journal for Numerical Methods in Engineering. 2015;104(7):472–501.
Burman E, Hansbo P. Deriving robust unfitted finite element methods from augmented Lagrangian formulations. In: Geometrically Unfitted Finite Element Methods and Applications, Springer; 2018.
Burman E, Hansbo P, Larson MG. Augmented Lagrangian finite element methods for contact problems. ESAIM: Mathematical Modelling and Numerical Analysis. 2019;53(1):173–95.
Cheng SW, Dey TK, Shewchuk J. Delaunay Mesh Generation. : Chapman and Hall/CRC; 2016.
Chouly Franz, Hild Patrick. A Nitsche-based method for unilateral contact problems: Numerical analysis. SIAM Journal on Numerical Analysis. 2013;51(2):1295–307.
Claus S, Bigot S, Kerfriden P. CutFEM method for Stefan-Signorini problems with application in pulsed laser ablation. SIAM Journal on Scientific Computing. 2018;40(5):B1444–69.
Claus S, Kerfriden P. A stable and optimally convergent LaTIn-CutFEM algorithm for multiple unilateral contact problems. International Journal for Numerical Methods in Engineering. 2018;113(6):938–66.
Dapogny Charles, Frey Pascal. Computation of the signed distance function to a discrete contour on adapted triangulation. Calcolo. 2012;49(3):193–219.
Duprez M, Bordas SPA, Bucki M, Bui HP, Chouly F, Lleras C, Lobos V, Lozinski A, Rohan PY, Tomar S. Quantifying discretization errors for soft tissue simulation in computer assisted surgery: A preliminary study. Applied Mathematical Modelling. 2020;77:709–23.
Duprez M, Lleras V, Lozinski A. Finite element method with local damage of the mesh. ESAIM: Mathematical Modelling and Numerical Analysis. 2019;53(6):1871–91.
Ericson Christer. Real-Time Collision Detection. CRC Press; 2004.
Fabre M, Pousin J, Renard Y. A fictitious domain method for frictionless contact problems in elasticity using Nitsche’s method. The SMAI Journal of Computational Mathematics. 2016;2:19–50.
Fedorov Andriy, Beichel Reinhard, Kalpathy-Cramer Jayashree, Finet Julien, Fillion-Robin Jean-Christophe, Pujol Sonia, Bauer Christian, Jennings Dominique, Fennessy Fiona, Sonka Milan, Buatti John, Aylward Stephen, Miller James V, Pieper Steve, Kikinis Ron. 3D Slicer as an image computing platform for the Quantitative Imaging Network. Magnetic Resonance Imaging. 2012;30(9):1323–41.
Fries Thomas-Peter, Belytschko Ted. The intrinsic XFEM: A method for arbitrary discontinuities without additional unknowns. International Journal for Numerical Methods in Engineering. 2006;68(13):1358–85.
Giovannelli Luca, Ródenas JJ, Navarro-Jiménez JM, Tur M. Direct medical image-based Finite Element modelling for patient-specific simulation of future implants. Finite Elements in Analysis and Design. 2017;136:37–57.
Groß S, Reusken A. Numerical Methods for Two-Phase Incompressible Flows. Berlin: Springer; 2011.
Hansbo Anita, Hansbo Peter. A finite element method for the simulation of strong and weak discontinuities in solid mechanics. Computer Methods in Applied Mechanics and Engineering. 2004;193(33–35):3523–40.
Haslinger J, Renard Y. A new fictitious domain approach inspired by the extended finite element method. SIAM Journal on Numerical Analysis. 2009;47(2):1474–99.
Johansson August, Kehlet Benjamin, Larson Mats G, Logg Anders. Multimesh finite element methods: Solving PDEs on multiple intersecting meshes. Computer Methods in Applied Mechanics and Engineering. 2019;343:672–89.
Kerfriden P, Allix O, Gosselet P. A three-scale domain decomposition method for the 3D analysis of debonding in laminates. Computational Mechanics. 2009;44(3):343–62.
Keyak JH, Meagher JM, Skinner HB, Mote CD Jr. Automated three-dimensional finite element modelling of bone: A new method. Journal of Biomedical Engineering. 1990;12(5):389–97.
Kirk S, Lee Y, Lucchesi FR, Aredes ND, Gruszauskas N, Catto J, Lemmerman J. Radiology Data from The Cancer Genome Atlas Urothelial Bladder Carcinoma [TCGA-BLCA] collection; 2016.
Legrain G, Chevaugeon N, Dréau K. High order XFEM and levelsets for complex microstructures: Uncoupling geometry and approximation. Computer Methods in Applied Mechanics and Engineering. 2012;241:172–89.
Massing André, Larson Mats G, Logg Anders. Efficient implementation of finite element methods on nonmatching and overlapping meshes in three dimensions. SIAM Journal on Scientific Computing. 2013;35(1):C23–47.
Moës N. Extended finite element method for cohesive crack growth. Engineering Fracture Mechanics. 2002;69:813–33.
Ruess Martin, Tal David, Trabelsi Nir, Yosibash Zohar, Rank Ernst. The finite cell method for bone simulations: Verification and validation. Biomechanics and Modeling in Mechanobiology. 2012;11(3):425–37.
Sethian James A. Fast marching methods. SIAM review. 1999;41(2):199–235.
Viceconti Marco, Bellingeri Luisella, Cristofolini Luca, Toni Aldo. A comparative study on different methods of automatic mesh generation of human femurs. Medical engineering & physics. 1998;20(1):1–10.
Acknowledgements
Not applicable.
Funding
The authors gratefully acknowledge the financial support provided by the European Union’s Horizon 2020 research and innovation programme under the Marie Sklodowska-Curie Grant Agreement No. 764644 (ITN RAINBOW) and the funding received by the Alfred Benzon Fond. The publishing fees of this article have been graciously covered by the French Association for Computational Mechanics (CSMA).
Author information
Contributions
All authors contributed equally to this article.
Ethics declarations
Competing interests
The authors declare that there are no competing interests.
Additional information
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
About this article
Cite this article
Claus, S., Kerfriden, P., Moshfeghifar, F. et al. Contact modeling from images using cut finite element solvers. Adv. Model. and Simul. in Eng. Sci. 8, 13 (2021). https://doi.org/10.1186/s40323-021-00197-2