We have pointed out above the importance of having the mesh vertices pre-aligned to ensure a good recombination rate. It is the purpose of the vertex placement algorithm to achieve this. This algorithm, however, relies on a number of data structures and geometrical concepts that are first introduced and developed below.
First, the vertex placement algorithm needs to know, at each point of the domain, the prescribed local mesh size and the local preferred mesh directions. In practice, these geometrical quantities are jointly obtained by evaluating a specific field structure called cross field. The generation of direction fields was extensively studied in [25].
Second, the notion of distance itself represents another degree of freedom of the method. We shall show that, when dealing with hex-meshing, it is particularly appropriate to compute distances with the infinity norm instead of the standard Euclidean norm.
Finally, the algorithm is characterized by a large number of spatial searches, needed to check whether or not a prospective vertex is too close to any already existing vertex. To optimize the efficiency of this operation, an R-tree data structure is employed [26, 27].
Cross fields
At each point of a region Ω, the frame field (d1,d2,d3) represents the three local orthogonal preferred directions of the hexahedral mesh. Frame fields are usually required to satisfy many constraints [16, 28]. On the geometrical edges of Ω, one of the three directions should be tangent to the edge itself [9]. On the surfaces of Ω, one of the three directions should be perpendicular to the surface [9, 16]. A last requirement is that the frame field should be as smooth as possible.
On the other hand, at each point of Ω, the size field represents the prescribed local mesh size value. Mesh sizes h1,h2,h3 are defined for every point of the volume in each of the directions d1,d2,d3. In this paper, the mesh size field at a point x is isotropic, i.e. h(x)=h1(x)=h2(x)=h3(x). The extension to anisotropic meshing will be done in a forthcoming work.
The user fixes the mesh size at the geometrical vertices of the model. One-dimensional size fields are then computed along the geometrical edges. Because the surfaces are bounded by geometrical edges, Dirichlet conditions can be imposed on the surface boundaries. A Laplace equation is used to obtain the size field over the surfaces. The size field over the volume is calculated in a similar manner. Continuous finite elements of the first order are employed in each case. The final size field is therefore a three-dimensional piecewise continuous field. The Laplace equation was chosen because it leads to smooth and gradual solutions.
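The mechanism above can be illustrated in one dimension: along a geometric edge, the discrete Laplace equation with Dirichlet values at both endpoints produces a harmonic, hence linearly interpolating, size field. The sketch below is a minimal stand-in for the finite element solve (function names and the Jacobi discretization are illustrative, not the paper's implementation):

```python
# Minimal 1D analogue of the size-field computation: a discrete Laplace
# equation along a geometric edge, with Dirichlet sizes at both ends.

def edge_size_field(h_start, h_end, n_interior, iters=2000):
    """Solve h'' = 0 on a uniform 1D grid by Jacobi iteration.

    The harmonic solution interpolates linearly between the endpoint
    sizes, which illustrates why the Laplace operator yields smooth,
    gradual size transitions."""
    h = [h_start] + [0.5 * (h_start + h_end)] * n_interior + [h_end]
    for _ in range(iters):
        # Jacobi sweep: each interior value becomes the mean of its neighbors
        h = [h[0]] + [0.5 * (h[i - 1] + h[i + 1])
                      for i in range(1, len(h) - 1)] + [h[-1]]
    return h

sizes = edge_size_field(1.0, 3.0, 3)  # converges to [1.0, 1.5, 2.0, 2.5, 3.0]
```

The same principle, with 2D and 3D Laplacians, gives the surface and volume size fields.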
The cross field (h1d1,h2d2,h3d3), now, combines both pieces of information into a single field. At each vertex of the mesh, the cross field evaluates to a symmetric real 3 by 3 tensor whose columns are the three orthogonal vectors parallel to the local preferred directions of the hexahedral mesh. Moreover, the norms of the vectors represent the local mesh size; the three norms are identical in case of an isotropic mesh (which is the case considered in this paper), but they may differ in case of an anisotropic mesh.
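The packing of directions and sizes into one tensor can be sketched as follows; this is an illustrative data layout (plain lists, hypothetical helper names), not the paper's data structure:

```python
import math

# Illustrative sketch: pack the three preferred directions d_i and sizes h_i
# into a single 3x3 tensor whose i-th column is h_i * d_i, so that both the
# directions and the sizes can be recovered from one field evaluation.

def cross_tensor(d1, d2, d3, h1, h2, h3):
    # stored column-by-column: tensor[i] is the i-th column h_i * d_i
    return [[h * c for c in d] for d, h in ((d1, h1), (d2, h2), (d3, h3))]

def local_sizes(tensor):
    """The norm of each column is the mesh size in that direction."""
    return [math.sqrt(sum(c * c for c in col)) for col in tensor]

T = cross_tensor((1, 0, 0), (0, 1, 0), (0, 0, 1), 0.5, 0.5, 0.5)
print(local_sizes(T))  # → [0.5, 0.5, 0.5]: isotropic, all norms equal
```

In the anisotropic case the three column norms would simply differ.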
The construction of a frame field on a region Ω belongs to the category of elliptic problems. Boundary conditions must be imposed on the boundary ∂ Ω. We thus proceed logically by explaining first how the frame field is constructed on surfaces, and deal afterwards with the prolongation into the volumes.
Let

    x : S′ ⊂ ℝ² → S ⊂ ℝ³,  u = (u, v) ↦ x(u, v)    (1)

be a smooth parametrization of the surface S (see [29–31] for a review of parametrization techniques for surface remeshing). It should be noted that the parametrization does not need to be conformal, i.e. the angles do not need to be conserved, for the algorithms presented in this paper. (This is a nice feature because guaranteed one-to-one conformal maps are more difficult to compute than bijective harmonic mappings.) For example, Figure 3 shows a harmonic parametrization of an arbitrary surface onto a unit disk.
Consider the two tangent vectors t1 = ∂x/∂u and t2 = ∂x/∂v, which are the images on the surface of the basis vectors t′1=(1,0) and t′2=(0,1) of the parameter plane. Because they are not parallel at any point of the surface, one can build the unit normal vector n=t1×t2/∥t1×t2∥. Each vector t tangent to the surface can be expressed as t=u t1+v t2, with (u,v) the covariant coordinates of t. The tangent vector t is thus the image of a vector t′=(u,v) in the parameter plane. It is easy to compute the covariant coordinates of any tangent vector t using the metric tensor of the parametrization. By definition, t=u t1+v t2. Then, t·t1=u t1·t1+v t2·t1 and t·t2=u t1·t2+v t2·t2, which reads in matrix form

    (t·t1, t·t2)ᵀ = M (u, v)ᵀ,    (2)

where

    M = ( t1·t1  t2·t1
          t1·t2  t2·t2 )

is the metric tensor, invertible for any smooth parametrization.
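Equation (2) amounts to a 2×2 linear solve. The following sketch (plain Python, illustrative function names) inverts the metric tensor explicitly to recover the covariant coordinates of a tangent vector:

```python
# Direct sketch of Eq. (2): recover the covariant coordinates (u, v) of a
# tangent vector t from its dot products with t1 and t2 by inverting the
# 2x2 metric tensor M.

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def covariant_coords(t, t1, t2):
    # metric tensor M = [[t1.t1, t2.t1], [t1.t2, t2.t2]]
    m11, m12 = dot(t1, t1), dot(t2, t1)
    m21, m22 = dot(t1, t2), dot(t2, t2)
    det = m11 * m22 - m12 * m21  # nonzero for a smooth parametrization
    b1, b2 = dot(t, t1), dot(t, t2)
    u = (m22 * b1 - m12 * b2) / det
    v = (-m21 * b1 + m11 * b2) / det
    return u, v

# t = 2*t1 + 3*t2 with non-orthogonal tangent vectors:
t1, t2 = (1.0, 0.0, 0.0), (1.0, 1.0, 0.0)
t = (5.0, 3.0, 0.0)
print(covariant_coords(t, t1, t2))  # → (2.0, 3.0)
```

Note that t1 and t2 need not be orthogonal; the metric tensor handles the non-orthogonality.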
For defining our frame field, a local orthonormal frame (s1,s2,n) is first constructed at all points x of the surface, with s1=t1/∥t1∥ and s2=n×t1/∥n×t1∥. Next, the direction d1 of the frame field is computed at the points x_b of the boundaries of the surface: d1 is the tangent vector to the boundary. The local orientation θ of the frame field at the boundary can then be computed as the oriented angle between s1 and d1. Then, an elliptic boundary value problem is used to propagate the complex number z(u)=a(u)+i b(u)=e^{4iθ(u)} in the parametric domain. More specifically, two Laplace equations with Dirichlet boundary conditions are solved in the parametric space in order to compute the real part a(u)=cos 4θ and the imaginary part b(u)=sin 4θ of z:

    ∇²a = 0 in the parametric domain,  a = cos 4θ on its boundary,
    ∇²b = 0 in the parametric domain,  b = sin 4θ on its boundary.    (3)
After solving those two PDEs, the frame field can be represented in the whole domain by the angle θ(u) = arg(a(u) + i b(u))/4.
The choice of 4θ as the argument of z is motivated by symmetry arguments: frame fields are equivalent when they are rotated around n by any angle that is a multiple of π/2. Details of that procedure are given in [32]. Finally, the frame field (d1,d2,d3) can be computed on the whole surface as follows:

    d1 = cos θ s1 + sin θ s2,  d2 = n × d1,  d3 = n,    (4)

where θ is the solution of the elliptic boundary value problem (3).
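The key property of the 4θ representation can be checked in a few lines; the sketch below (illustrative names, complex arithmetic from the standard library) verifies that crosses rotated by π/2 map to the same complex number, and that θ is recovered from the interpolated components (a, b):

```python
import cmath
import math

# Sketch of the 4-theta representation: a cross is stored as z = e^{4i*theta},
# so rotating the frame by any multiple of pi/2 leaves z unchanged.

def cross_to_z(theta):
    return cmath.exp(4j * theta)

def z_to_theta(z):
    # theta is recovered (modulo pi/2) as arg(z)/4
    return cmath.phase(z) / 4.0

z1 = cross_to_z(0.2)
z2 = cross_to_z(0.2 + math.pi / 2)  # same cross, rotated by 90 degrees
print(abs(z1 - z2) < 1e-12)         # → True: the representation is invariant
```

This invariance is what allows the components a and b to be interpolated linearly between mesh vertices, as done for Figure 5.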
As an example, Figure 4 presents the frame field computed on surfaces of a mechanical part. Figure 5 shows two triangular meshes of different coarseness and their resulting frame fields. Linear interpolation of the a and b components discussed earlier was used in order to obtain the same number of frames regardless of the mesh density. As seen from the figure, the frame field (a) is not entirely radial and contains defects because the mesh (a) is too coarse.
The frame field at any point inside the volume is then chosen to be equal to the frame field at the closest surface vertex [24]. (The ANN nearest-neighbor library is employed for the queries [33].)
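The rule above reduces to a nearest-neighbor query. A brute-force O(n) stand-in for the ANN query is sketched below (the names and the linear scan are illustrative; the actual code uses the ANN library's tree-based search):

```python
# Brute-force stand-in for the ANN nearest-neighbor query: the frame at an
# interior point is simply copied from the closest surface vertex.

def nearest_surface_frame(p, surface_vertices, surface_frames):
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    i = min(range(len(surface_vertices)),
            key=lambda k: dist2(p, surface_vertices[k]))
    return surface_frames[i]

verts = [(0, 0, 0), (1, 0, 0)]
frames = ["frame_at_origin", "frame_at_x1"]
print(nearest_surface_frame((0.9, 0.1, 0.0), verts, frames))  # → frame_at_x1
```

This piecewise-constant extension explains why the volume frame field inherits any non-smoothness of the distance function to the walls, as noted next.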
These frame fields are not going to be smooth whenever the distance function to the walls is not itself smooth. Recently, two methods capable of generating smooth frame fields have been developed [16, 17]. Both of these methods employ LBFGS optimization to minimize energy functionals.
Measuring distances
For inserting a new mesh vertex in our frontal algorithm, the distance between a prospective vertex x_i and any already existing vertex x must be larger than kh, where h is the local mesh size and k a free parameter of the algorithm ranging from 0 to 1. The parameter k must be strictly smaller than one; otherwise, too many valid vertices are excluded from the cloud. In the implementation described in this work, k is equal to 0.7.
The way distances between vertices are calculated is however a degree of freedom of the method. When dealing with hex-meshing, it turns out to be advantageous to compute distances in the infinity norm, instead of in the Euclidean norm:
    ∥x − y∥2 = ( Σ_{i=1}^{3} (xi − yi)² )^{1/2},    (5)

    ∥x − y∥∞ = max_{i=1,2,3} |xi − yi|.    (6)
In the infinity norm, the unit sphere is actually a cube, which reduces to a square in two dimensions (see Figure 6). The exclusion area around each prospective vertex is therefore a cube, resp. a square, which precisely matches the shape of the elements one wishes to build.
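The oriented exclusion test can be sketched directly: project x − y onto the local frame directions and take the largest absolute component. This is an illustrative implementation under the assumption of an orthonormal frame (names are hypothetical):

```python
# Sketch of the oriented infinity-norm test: the exclusion region around a
# vertex is a cube aligned with the cross field, so the distance is the max
# absolute projection of x - y onto the three frame directions.

def dist_inf_oriented(x, y, frame):
    """frame: three orthonormal direction vectors (d1, d2, d3)."""
    diff = [a - b for a, b in zip(x, y)]
    return max(abs(sum(c * e for c, e in zip(diff, d))) for d in frame)

frame = [(1, 0, 0), (0, 1, 0), (0, 0, 1)]
k, h = 0.7, 1.0
y = (0.5, 0.6, 0.1)
too_close = dist_inf_oriented((0, 0, 0), y, frame) < k * h
print(too_close)  # → True: y lies inside the exclusion cube of side 2*k*h
```

With the Euclidean norm the same point would be tested against a sphere of radius kh instead of a cube, which does not match the shape of the hexahedra one wishes to build.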
Contrary to the Euclidean norm, the infinity norm is not isotropic and, consequently, it has an orientation, which is given by the frame field. In the parameter plane, due to the change of coordinates (1), the exclusion area is the parallelogram determined by

    max( |d1 · M_x (u′ − u)|, |d2 · M_x (u′ − u)| ) ≤ kh,    (7)

where M_x is the Jacobian matrix of (1), evaluated at x.
The infinity distance is not a differentiable function [24]. However, this is not an issue, because the frontal algorithm does not require the computation of distance derivatives.
Using the infinity distance instead of the Euclidean distance can increase the hexahedra percentage. The quarter cylinder illustrated on Figure 7 provides an example where an improvement by 5% of the ratio of hexahedra is observed, by simply using the L∞ norm instead of the L2 norm in the R-tree spatial search algorithm described in the next section.
Using R-trees for spatial searches
As said before, a prospective vertex is effectively created only if there is enough unoccupied space around it. The size of this exclusion area or volume depends on the local mesh size. Depending on the dimension and the chosen norm, the shape of the exclusion region can be a parallelogram or an ellipse (in 2D), and a cube or a sphere (in 3D).
The computation of the distance between the prospective vertex and all the other vertices would have a quadratic time complexity and would therefore be prohibitive. The number of computations required to enforce the exclusions can however be considerably decreased if the exclusion cube of each vertex is enclosed in a bounding box whose edges are parallel to the coordinate axes. An R-tree data structure [26, 27] can efficiently determine bounding box intersections; it is then enough to compute the distance between pairs of vertices whose boxes intersect each other.
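The broad-phase/narrow-phase idea can be sketched as follows. The linear scan below stands in for the R-tree (which would return the intersecting boxes without visiting them all); the box-overlap test itself is the standard axis-aligned one:

```python
# Broad-phase sketch: each vertex carries an axis-aligned bounding box around
# its exclusion region; exact distances need only be computed for pairs whose
# boxes intersect. A real implementation stores the boxes in an R-tree; the
# linear scan below just illustrates the filtering step.

def boxes_intersect(a, b):
    """a, b: (min_corner, max_corner) pairs of 3D points."""
    return all(a[0][i] <= b[1][i] and b[0][i] <= a[1][i] for i in range(3))

def candidates(new_box, boxes):
    """Indices of existing boxes that overlap the prospective vertex's box."""
    return [i for i, b in enumerate(boxes) if boxes_intersect(new_box, b)]

boxes = [((0, 0, 0), (1, 1, 1)), ((5, 5, 5), (6, 6, 6))]
print(candidates(((0.5, 0.5, 0.5), (1.5, 1.5, 1.5)), boxes))  # → [0]
```

Only the vertices whose boxes are returned by the query are then subjected to the exact (oriented infinity-norm) distance test.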
We now illustrate with a 2D example of a planar surface how to decide whether a prospective vertex can be inserted or not. For this example, we have chosen the infinity norm for computing distances. In Figure 8(a), x1 is the prospective vertex and x is an existing mesh vertex. The dotted square around x1 is the oriented exclusion area of vertex x1, which is computed from the surface cross field (h1d1,h1d2) with a uniform mesh size field h1. The solid box surrounding the prospective vertex is the bounding box of the exclusion area, parallel to the xy-coordinate axes. This bounding box should always include the oriented exclusion square of side 2kh, whatever its orientation. This condition is satisfied in 2D if the box is of side 2√2 kh and in 3D if the cube is of side 2√3 kh. Even if the boxes intersect each other in Figure 8(a), the distance between x1 and x is sufficiently large. Thus, x1 can be inserted in the cloud and added to the queue.
Figure 8(b) shows the same two vertices. Again, the boxes intersect each other. This time, however, x1 is too close to x and x1 cannot be added to the cloud or to the queue.
It should be noted that on Figures 8(a) and 8(b), the orientation of the exclusion square of x1 is not necessarily equal to that of x. The local mesh sizes at x1 and x can also be different, as illustrated in Figure 8(a). However, the insertion of x1 is considered valid as long as x lies outside the dotted exclusion square of x1.
For non-planar surfaces, the surfaces need to be parametrized. As the parametrization is not necessarily conformal, i.e. the angles between d1 and d2 are not conserved, the dotted squares (exclusion area) of Figure 8 become parallelograms in the parametric space. As far as the bounding boxes are concerned, they are computed in the same manner and are then parallel to the uv-coordinate axis of the parametric space.
Let us assume that, on surfaces, each vertex attempts to create four vertices in the four cardinal directions. If the surface normal is not constant, these prospective vertices may not lie on the surface. The next section describes a scheme capable of solving this issue by intersecting surfaces with circle arcs.
Surface meshing: the packing of parallelograms algorithm
The quadrilateral mesh algorithm presented here is a simpler variant of [32] that we call packing of parallelograms. Consider one vertex located at point u=(u,v) of the parameter plane, which corresponds to point x(u,v) in the 3D space (see Figure 9). The cross field at this point of the surface is (h1d1, h2d2, h_n n), in terms of the three orthonormal preferred mesh directions, {d1, d2, n}, and the three corresponding mesh sizes, {h1, h2, h_n}.
In a perfect quad mesh, each vertex is connected to four neighboring vertices forming a cross parallel to the cross field. In our approach, four prospective points x_i, i=1,…,4, are constructed in the neighborhood of point x with the aim of reproducing this perfect situation.
Points x1 and x2 are constructed as the intersection of the surface with a circle of radius h1, centered on x and situated in the plane Π of normal d2 (see Figure 9). Points x3 and x4 are constructed as the intersection of the surface with a circle of radius h2, centered on x and situated in the plane of normal d1 (not shown in the figure for clarity).
Numerical difficulties associated with the surface–curve intersection are overcome by choosing a good initial guess for the intersection. If we approximate the surface by its tangent plane at x, point x1 is situated at x1 = x + h1 d1. A good initial guess in the parameter plane is u1 = u + d u1, where d u1 = (du1, dv1) is computed using (2), i.e.

    (du1, dv1)ᵀ = M⁻¹ (h1 d1 · t1, h1 d1 · t2)ᵀ.

This also gives u2 = u − d u1. Similarly, d u3 = (du3, dv3) is computed with t = h2 d2, which gives u3 = u + d u3 and u4 = u − d u3.
The algorithm works as follows. Each vertex of the boundary is inserted in a FIFO queue. Then, the vertex x at the head of the queue is removed and its four prospective neighbors x_i are computed. A new vertex x_i is inserted at the tail of the queue if the following conditions are satisfied: (i) vertex x_i is inside the domain and (ii) vertex x_i is not too close to any of the vertices that have already been inserted.
As for the first condition, it is enough to check whether the preimage of x_i is inside the bounds of the parameter domain. Concerning the second condition, the distances on the surface should theoretically be measured in terms of geodesics. This is however clearly overkill from a mesh generation point of view. We define an exclusion zone for every vertex that has already been inserted (this includes boundary vertices). This exclusion zone is a parallelogram in the parameter plane (see the yellow parallelogram of Figure 9). This parallelogram is scaled down by a factor k=0.7 in order to allow the insertion of (at least) points x_i. The different stages of the procedure for a non-planar surface are presented on Figure 10 and Figure 11. Then, the surfaces are triangulated in the parameter plane using an anisotropic Delaunay kernel, and the triangles are subsequently recombined into quadrilaterals using the Blossom-Quad algorithm [34].
As shown on Figure 11, exclusion areas can become anisotropic parallelograms in the parametric plane. However, they always correspond to squares in the three-dimensional space. The vertices are triangulated in the parametric plane. Anisotropic triangulation is therefore necessary in order to obtain the expected arrangement of right triangles.
Volume meshing: the 3D point insertion algorithm
Volume meshing proceeds in the same way as surface meshing. The procedure starts from a 2D triangular mesh that has been created using surface frame fields. A frontal algorithm, working in the same manner as the one used for surfaces, is used to create well-aligned vertices inside the volume, starting from surface points.
All boundary mesh vertices are initially pushed into a queue. The vertices are popped in order: each vertex Q popped out of the queue attempts to create six neighboring vertices in the six cardinal directions P1,2=Q±h d1, P3,4=Q±h d2, P5,6=Q±h d3 at a distance h from itself (see Figure 12).
A prospective vertex is added to the vertex cloud and to the queue only if it satisfies the two following conditions:
1. It is inside the domain.
2. It is not too close to an existing mesh vertex, i.e. its distance to every existing vertex is larger than kh.
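The insertion loop can be sketched in a few lines. In this illustrative stand-in, the domain is a unit cube, the frame field is constant, and the in-domain and proximity tests are brute-force replacements for the octree and R-tree queries of the actual algorithm:

```python
from collections import deque

# Condensed sketch of the 3D frontal loop: pop a vertex, try its six cardinal
# neighbors at distance h along the frame directions, and keep those that are
# inside the domain and not too close to an existing vertex.

def frontal_insert(seeds, h=0.5, k=0.7):
    frame = [(1, 0, 0), (0, 1, 0), (0, 0, 1)]  # constant frame, for simplicity
    def inside(p):                              # stand-in for the octree query
        return all(0.0 <= c <= 1.0 for c in p)
    def too_close(p, cloud):                    # stand-in for the R-tree query
        return any(sum((a - b) ** 2 for a, b in zip(p, q)) < (k * h) ** 2
                   for q in cloud)
    cloud, queue = list(seeds), deque(seeds)
    while queue:
        x = queue.popleft()
        for d in frame:
            for s in (+1, -1):
                p = tuple(c + s * h * e for c, e in zip(x, d))
                if inside(p) and not too_close(p, cloud):
                    cloud.append(p)
                    queue.append(p)
    return cloud

cloud = frontal_insert([(0.0, 0.0, 0.0)])
print(len(cloud))  # → 27: the h-aligned 3x3x3 grid filling the unit cube
```

With a constant frame and uniform h, the front sweeps the cube and produces exactly the aligned grid a structured hex mesh would want.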
An octree data structure is again employed to efficiently determine if a vertex is inside the domain [11].
Eventually, no more prospective vertices can be added to the cloud without being too close to existing ones. The process then stops and the cloud is tetrahedralized with a Delaunay procedure [35].
The frontal algorithm was applied to the quarter cylinder starting from the surface mesh shown in Figure 13(a). In Figure 13(b), lines are traced between each vertex and its parent in order to observe the progression of the 3D point insertion algorithm.
The quality of the alignment inside the geometry is very dependent on the quality of the alignment on the boundaries. If the triangles on the boundaries are far from being right-angled, then the vertices inside the geometry will not be well aligned. Various algorithms are capable of generating sets of aligned vertices on surfaces, such as the Delquad algorithm [32] or Lévy-Liu’s algorithm [24, 36]. However, for the majority of the examples presented in this article, a two-dimensional version of the frontal algorithm was employed.
As explained earlier, each vertex attempts to create six other vertices at a distance d=h from itself. For smoother size transitions, d can instead be an average between the local mesh size at the parent vertex and the local mesh size at the prospective vertex.
Volume meshing: Yamakawa-Shimada’s algorithm and finite element conformity
This section briefly describes Yamakawa-Shimada’s recombination algorithm. It then discusses the problem of finite element conformity in the case of mixed hex meshes.
Yamakawa-Shimada’s algorithm begins by iterating through the tetrahedra of the initial mesh. For each tetrahedron, it attempts to find neighboring tetrahedra with which to construct a hexahedron. Five, six or seven tetrahedra are required to construct one hexahedron. Three patterns of assembly are considered. Two out of these three patterns are described in [9]. When a potential hexahedron is found, it is added to an array. However, the hexahedron will not necessarily be part of the final mesh. Once all tetrahedra have been visited, the array is sorted by hex quality.
The quality Q is defined as follows:

    Q = | min_{i=1,…,8} ( v_{i1} · (v_{i2} × v_{i3}) ) / ( ∥v_{i1}∥ ∥v_{i2}∥ ∥v_{i3}∥ ) |,    (8)

where i is the vertex number; for a hexahedron, i goes from 1 to 8, and v_{i1}, v_{i2} and v_{i3} are the three vectors parallel to the three edges connected to vertex i. Q is in fact the modulus of the minimum scaled Jacobian [9]. Evidently, Q is meaningless for invalid hexahedra. Invalid hexahedra are characterized by a null or negative Jacobian determinant, which renders the mesh improper for calculations.
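The quality measure can be sketched directly from its definition. The corner-to-edge bookkeeping below uses a common hexahedron numbering chosen so that all corner triples are right-handed; this convention is illustrative, not necessarily the paper's:

```python
import math

# Sketch of quality (8): minimum over the 8 corners of the scaled Jacobian,
# i.e. the triple product of the three edge vectors at each corner divided
# by the product of their lengths.

def sub(a, b):
    return tuple(x - y for x, y in zip(a, b))

def norm(v):
    return math.sqrt(sum(c * c for c in v))

def triple(a, b, c):  # scalar triple product a . (b x c)
    return (a[0] * (b[1] * c[2] - b[2] * c[1])
            - a[1] * (b[0] * c[2] - b[2] * c[0])
            + a[2] * (b[0] * c[1] - b[1] * c[0]))

def hex_quality(pts, corner_edges):
    sj = []
    for v, nbrs in corner_edges.items():
        e1, e2, e3 = (sub(pts[n], pts[v]) for n in nbrs)
        sj.append(triple(e1, e2, e3) / (norm(e1) * norm(e2) * norm(e3)))
    return abs(min(sj))

pts = [(0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 0),
       (0, 0, 1), (1, 0, 1), (1, 1, 1), (0, 1, 1)]
corner_edges = {0: (1, 3, 4), 1: (2, 0, 5), 2: (3, 1, 6), 3: (0, 2, 7),
                4: (7, 5, 0), 5: (4, 6, 1), 6: (5, 7, 2), 7: (6, 4, 3)}
print(hex_quality(pts, corner_edges))  # → 1.0 for the unit cube
```

A distorted hexahedron yields Q < 1, and a degenerate or inverted corner drives the minimum scaled Jacobian to zero or below.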
Starting from the highest quality hexahedron, the algorithm then iterates through the array. Potential hexahedra composed of tetrahedra not yet marked for deletion are added to the mesh. The tetrahedra of the added hexahedron are then marked for deletion. It is to be noted that only a small fraction of potential hexahedra appear in the final mesh.
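The greedy selection step can be sketched as follows; candidates are modeled here as (quality, set-of-tetrahedron-ids) pairs, an illustrative representation rather than the actual data structure:

```python
# Greedy recombination sketch: candidate hexahedra are visited in decreasing
# quality order and accepted only if none of their tetrahedra has already
# been consumed by a better candidate.

def greedy_recombine(candidates):
    used, accepted = set(), []
    for quality, tets in sorted(candidates, key=lambda c: -c[0]):
        if not used.intersection(tets):
            accepted.append(tets)
            used.update(tets)  # mark these tetrahedra for deletion
    return accepted

cands = [(0.9, {1, 2, 3, 4, 5}),
         (0.8, {5, 6, 7, 8, 9}),      # shares tet 5 with the best candidate
         (0.7, {10, 11, 12, 13, 14})]
result = greedy_recombine(cands)
print(len(result))  # → 2: the 0.8 candidate is rejected
```

This illustrates why only a small fraction of the potential hexahedra survive: each accepted hexahedron removes its tetrahedra from the pool available to all lower-quality candidates.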
Prisms can later be added by following a similar procedure [9]. All prisms are composed of three tetrahedra. There is only one pattern of construction for prisms [9].
Figure 14 shows a mixed mesh created with Yamakawa-Shimada’s algorithm.
Let us assume that finite elements of the first order are employed. The tetrahedral shape functions are then linear, while the hexahedral shape functions are trilinear [37]. On triangular faces, the interpolation is linear and involves three degrees of freedom. On quadrilateral faces, the interpolation is bilinear and involves four degrees of freedom [8, 9]. If a nonplanar quadrilateral face is adjacent to a triangular face, there will be a gap or an overlap [9]. The elements then no longer form a perfect partition of the domain, which violates the basic assumptions of the finite element method. Gaps or overlaps can also be created by several configurations of neighboring hexahedra or prisms. Figures 15 and 16 show four cases of non-conformities between hexahedra [9].
Figures 17 and 18 show six cases of non-conformities between one hexahedron and one prism. Non-conformities resulting from neighboring prisms can be deduced from these six cases.
Yamakawa-Shimada’s algorithm should therefore avoid creating the configurations illustrated on Figures 15, 16, 17 and 18 while iterating through the sorted arrays of potential hexahedra and prisms. Non-conformities can be efficiently identified by employing hashing techniques.
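The hashing idea can be sketched as follows: faces are keyed by their sorted vertex tuples in a hash-based container, so a triangle sharing its vertices with a stored quadrilateral face is detected in one lookup per pair. The keying scheme below is illustrative; real codes hash faces similarly but with their own conventions:

```python
# Hedged sketch of face hashing for non-conformity detection: quadrilateral
# faces are stored under a canonical (sorted) vertex key; a triangle whose
# three vertices all belong to a stored quad signals a quad/triangle
# adjacency, i.e. a potential non-conformity.

def quad_key(face):
    return tuple(sorted(face))

def find_nonconformities(quad_faces, tri_faces):
    quads = {quad_key(q) for q in quad_faces}
    hits = []
    for t in tri_faces:
        for q in quads:
            if set(t) <= set(q):  # triangle vertices are a subset of the quad's
                hits.append((t, q))
    return hits

quads = [(1, 2, 3, 4)]
tris = [(1, 2, 3), (5, 6, 7)]
print(find_nonconformities(quads, tris))  # → [((1, 2, 3), (1, 2, 3, 4))]
```

Because every face lookup is constant time, the check adds little cost to the sweep through the sorted candidate arrays.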
After the creation of hexahedra and prisms, some tetrahedra are recombined into pyramids. Every pair of tetrahedra resting on a quadrilateral face is merged to form a pyramid. This step can fix many non-conformities. However, it does not resolve them all. As shown on Figure 19, many quadrilateral faces can still be adjacent to triangular faces belonging to either tetrahedra or pyramids.
These non-conformities can be fixed by Owen-Canann-Saigal’s algorithm [38]. Owen-Canann-Saigal’s algorithm first creates a flat pyramid on each non-conformal quadrilateral face. The apex of the pyramid is not initially present in the mesh, but it is added by the algorithm. Surrounding tetrahedra and pyramids need to be subdivided to accommodate this new vertex. The pyramid is then raised so it does not have a null volume. Figure 20 illustrates the pyramid constructed to correct the non-conformity on Figure 19.
Owen-Canann-Saigal’s algorithm can render a mixed hexahedral mesh completely conformal. However, it has a drawback. It increases the number of tetrahedra and pyramids, which lowers the percentage of hexahedra by number. As a consequence, Owen-Canann-Saigal’s algorithm was not used for the results presented below.
Some quadrilateral faces will remain adjacent to one or two triangles. Finite element solvers capable of handling these types of non-conformities are therefore required.