# Multilevel preconditioners for embedded enriched partition of unity approximations

- Marc Alexander Schweitzer\(^{1,2}\)
- Albert Ziegenhagel\(^{1}\)

**5**:13

https://doi.org/10.1186/s40323-018-0107-6

© The Author(s) 2018

**Received: **9 January 2018

**Accepted: **30 April 2018

**Published: **18 May 2018

## Abstract

In this paper we are concerned with the non-invasive embedding of enriched partition of unity approximations in classical finite element simulations and the efficient solution of the resulting linear systems. The employed embedding is based on the partition of unity approach introduced in Schweitzer and Ziegenhagel (Embedding enriched partition of unity approximations in finite element simulations. In: Griebel M, Schweitzer MA, editors. Meshfree methods for partial differential equations VIII. Lecture Notes in Computational Science and Engineering, Cham, Springer International Publishing; 195–204, 2017) which is applicable to any finite element implementation and thus allows for a stable enrichment of, e.g., commercial finite element software to improve the quality of its approximation properties in a non-invasive fashion. The major remaining challenge is the efficient solution of the arising linear systems. To this end, we apply classical subspace correction techniques to design non-invasive efficient multilevel solvers by blending a non-invasive algebraic multigrid method (applied to the finite element components) with a (geometric) multilevel solver (Griebel and Schweitzer in SIAM J Sci Comput 24:377–409, 2002; Schweitzer in Numer Math 118:307–28, 2011) (applied to the enriched embedded components). We present first numerical results in two and three space dimensions which clearly show the (close to) optimal performance of the proposed solver.

## Introduction

The direct generalization and extension of the classical finite element method (FEM) to allow for the use of arbitrary non-polynomial basis functions, as in partition of unity (PU) based approaches like XFEM/GFEM [1–5], usually requires a fair amount of implementational work within the original finite element (FE) code. Thus, the timely evaluation of novel generalizations of the FEM in large-scale industrial applications, which in general rely on commercial software packages, is usually not feasible. This issue can, however, be overcome with the help of the embedding approach presented in [6]. It allows for the non-invasive stable embedding of an arbitrary approximation space \(V_{\mathrm {ENR}}\) into a classical FE space \(V_{\mathrm {HOST}}\) and thereby enables the easy evaluation of novel generalizations of the FEM employing arbitrary approximation functions in an industrial context. In [6] it was demonstrated that the approach is free from artifacts and yields substantial improvements in terms of accuracy. Note that this approach is very different from classical global–local techniques, in which an independent auxiliary local problem is considered. We, however, blend two function spaces \(V_{\mathrm {HOST}}\) and \(V_{\mathrm {ENR}}\) to discretize the global problem directly with a single larger function space \(V_{\mathrm {BND}}\) which comprises \(V_{\mathrm {HOST}}\) and \(V_{\mathrm {ENR}}\); compare also [7–9].

In this paper we are concerned with the construction of highly efficient solvers and preconditioners for the linear system arising from the discretization of the global problem with this blended function space \(V_{\mathrm {BND}}\). To this end, we are concerned with the non-invasive construction of multilevel preconditioners based on subspace correction methods [10], which are also referred to as Schwarz methods, e.g. in the domain decomposition context [11]. We construct the coarsening process for \(V_{\mathrm {BND}}\) with the help of an available (geometric) multilevel structure of \(V_{\mathrm {ENR}}\) and a multilevel decomposition of \(V_{\mathrm {HOST}}\) obtained by an algebraic multigrid (AMG) method in a non-invasive fashion. The remainder of this paper is structured as follows: we first quickly introduce the mathematical foundation of our embedding approach, the partition of unity method (PUM), in “A partition of unity method for the embedding of arbitrary approximation spaces in finite element spaces” and summarize the actual embedding procedure. In “Subspace correction methods” we introduce efficient subspace correction preconditioners for the linear system arising from the discretization of the global problem by our blended function space \(V_{\mathrm {BND}}\). The results of our numerical experiments with these preconditioners are presented in “Numerical results” before we conclude with some remarks in “Concluding remarks”.

## A partition of unity method for the embedding of arbitrary approximation spaces in finite element spaces

### Theorem 1

Let \(\Omega \subset \mathbb {R}^D\) be a Lipschitz domain. Let \(\{\varphi _i:i=1,\ldots ,N\}\) be an admissible non-negative partition of unity defined on patches \(\omega _i:={\text {supp}}(\varphi _i)\). Let a collection of local approximation spaces \(V_i = {\text {span}}\langle \vartheta _i^m \rangle \subset H^1(\omega _i)\) be given and let \(f \in H^1(\Omega )\) be the function to be approximated. Assume that the local approximation spaces \(V_i\) have the following approximation properties: on each patch \(\Omega \cap \omega _i\), the function \(f\) can be approximated by a function \(v_i \in V_i\) such that

$$\Vert f - v_i\Vert _{L^2(\Omega \cap \omega _i)} \le \epsilon _i \quad \text {and}\quad \Vert \nabla (f - v_i)\Vert _{L^2(\Omega \cap \omega _i)} \le \tilde{\epsilon }_i .$$

Then, the function \(u_{\mathrm {PU}} := \sum _{i=1}^N \varphi _i v_i \in \sum _{i=1}^N \varphi _i V_i \subset H^1(\Omega )\) satisfies the classical estimates

$$\Vert f - u_{\mathrm {PU}}\Vert _{L^2(\Omega )} \le \sqrt{M}\, C_\infty \Big (\sum _{i=1}^N \epsilon _i^2\Big )^{1/2}$$

and

$$\Vert \nabla (f - u_{\mathrm {PU}})\Vert _{L^2(\Omega )} \le \sqrt{2M}\, \Big (\sum _{i=1}^N \Big (\frac{C_\nabla }{{\text {diam}}(\omega _i)}\Big )^2 \epsilon _i^2 + C_\infty ^2\, \tilde{\epsilon }_i^2\Big )^{1/2},$$

where \(M\) denotes the maximal number of overlapping patches and \(C_\infty \), \(C_\nabla \) bound \(\Vert \varphi _i\Vert _{L^\infty }\) and \({\text {diam}}(\omega _i)\Vert \nabla \varphi _i\Vert _{L^\infty }\), respectively; see [12].
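To make the blending construction of Theorem 1 concrete, the following self-contained sketch (our illustration under simplifying assumptions, not the paper's implementation: 1D, Shepard functions from hat weights, local linear least-squares fits as the spaces \(V_i\)) builds a partition of unity on overlapping patches and blends the local fits into a global approximation \(u_{\mathrm{PU}} = \sum_i \varphi_i v_i\):

```python
import numpy as np

# Patch centers and a patch radius chosen so that neighboring patches overlap.
N = 32
centers = np.linspace(0.0, 1.0, N)
radius = 1.5 / (N - 1)

def weight(x, c):
    # Hat-shaped weight supported on the patch omega_i around center c.
    return np.maximum(0.0, 1.0 - np.abs(x - c) / radius)

x = np.linspace(0.0, 1.0, 401)
W = np.array([weight(x, c) for c in centers])
phi = W / W.sum(axis=0)            # Shepard functions: a partition of unity

f = np.sin(2.0 * np.pi * x)        # function to be approximated

u = np.zeros_like(x)
for i, c in enumerate(centers):
    mask = W[i] > 0.0              # sample points inside omega_i
    # Local space V_i = span{1, x - c}: least-squares fit of f on the patch.
    A = np.vstack([np.ones(mask.sum()), x[mask] - c]).T
    coef, *_ = np.linalg.lstsq(A, f[mask], rcond=None)
    u += phi[i] * (coef[0] + coef[1] * (x - c))   # blend v_i via the PU
```

Because the \(\varphi_i\) sum to one, the global error is a convex blend of the local fit errors, mirroring the estimates of the theorem.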

## Subspace correction methods

The computational effort associated with the solution of linear systems like (9) accounts for a very large part (often even the largest) of the overall computational cost in any implicit or stationary simulation. Thus, the development of efficient linear solvers is of great practical relevance and is still an active research field today. Even though classical general-purpose numerical linear algebra techniques such as (sparse) matrix factorizations, see e.g. [15], are widely used in practice, it is well-known that their computational complexity is not optimal and that specialized iterative linear solvers are needed to tackle large-scale problems with millions of unknowns efficiently.

A very sophisticated class of iterative methods which show an optimal scaling not only in the storage demand but also in the operation count are so-called multilevel iterative solvers or (geometric) multigrid methods, which are particular instances of subspace correction methods [10]. Note, however, that these multilevel and multigrid solvers are not general algebraic methods but involve a substantial amount of information about the discretization and possibly the PDE. Thus, the introduction of such a (geometric) multilevel solver in a commercial software package is highly invasive and typically infeasible. However, there exist extensions of (geometric) multigrid methods, so-called algebraic multigrid methods (AMG) [16–19], which can be used as a non-invasive plugin solver also in commercial software [20, 21]. Such AMG solvers are successfully utilized in many different application fields, yet they are essentially designed for classical mesh-based piecewise linear discretizations and thus are in general not directly applicable to discretizations with arbitrary approximation functions, i.e. \(V_{\mathrm {BND}}\) and \(V_{\mathrm {ENR}}\). Therefore, no optimal linear solver for (9) is readily available and we need to take the specific construction of our blended approximation space \(V_{\mathrm {BND}}\) into account when designing a respective iterative linear solver. To this end, we employ classical subspace correction techniques which can utilize splittings such as (2) and (10).

Given a splitting of the global discretization space *V* into subspaces \(V_i\), i.e. \(V = \sum _i V_i\), the PSC iteration reads

$$\tilde{u} \leftarrow \tilde{u} + \eta \sum _i B_i (\hat{f} - K \tilde{u}),$$

where \(\eta > 0\) is a damping parameter and the correction operators \(B_i\) are induced by (approximate) subspace solvers \(W_i\) on the subspaces \(V_i\).

In classical multigrid terminology the approximate subspace solvers \(W_i\) in (18) are referred to as smoothers and the subspaces \(V_i\) correspond to the employed approximation spaces defined on different refinement levels of the underlying mesh, i.e. \(V_J=V\) denotes the finest discretization space and \(V_i\) with \(i<J\) are referred to as coarse spaces. The role of the smoothers \(W_i\) is to reduce high frequency error components whereas the corrections \(B_i (\hat{f} - K\tilde{u})\) obtained on the coarser levels should reduce low frequency errors so that all error frequencies are efficiently reduced in each iteration.
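A parallel subspace correction step of this kind can be sketched on a small model problem (our illustration, not the paper's solver; we assume a 1D Poisson matrix, an overlapping block splitting, exact local solves as the subspace solvers \(W_i\), and damping \(\eta = 0.5\)):

```python
import numpy as np

# 1D Poisson model problem: K u = f with K = tridiag(-1, 2, -1).
n = 63
K = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
f = np.ones(n)

# Overlapping splitting of the index set into blocks (the subspaces V_i).
block, overlap = 16, 2
subsets = [np.arange(max(0, s - overlap), min(n, s + block + overlap))
           for s in range(0, n, block)]

def psc_step(u, eta=0.5):
    """One damped parallel subspace correction (additive Schwarz) step."""
    r = f - K @ u
    du = np.zeros_like(u)
    for idx in subsets:
        # Exact local solve plays the role of the subspace solver W_i.
        du[idx] += np.linalg.solve(K[np.ix_(idx, idx)], r[idx])
    return u + eta * du

u = np.zeros(n)
for _ in range(400):               # many steps: no coarse space is used here
    u = psc_step(u)

u_star = np.linalg.solve(K, f)     # reference solution for comparison
```

The deliberately slow convergence of this one-level iteration, which lacks any coarse-level correction, is precisely what motivates the multilevel coarse spaces discussed next.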

As a final component we need to specify the approximate subspace solvers or smoothers on the resulting coarse spaces \(V_{\mathrm {BND},i}\) to instantiate the iterations (15) and (17). In the following we focus on iterations of the form (17), in particular we employ the classical multigrid iteration \(M^{\nu _1, \nu _2}_\gamma \) given in Algorithm 3.1 and consider different numbers of smoothing steps \(\nu =\nu _1=\nu _2\) as well as the *V*-cycle (\(\gamma =1\)) and the *W*-cycle (\(\gamma =2\)).
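A minimal geometric sketch of such a cycle \(M^{\nu _1, \nu _2}_\gamma\) (our illustration for a 1D Poisson problem with damped-Jacobi smoothing and standard transfer operators, not the patch-based smoothers used in the paper) shows how the parameters \(\nu\) and \(\gamma\) enter:

```python
import numpy as np

def mg_cycle(u, f, h, nu1, nu2, gamma):
    """One multigrid cycle for -u'' = f on (0,1), stencil (1/h^2)[-1 2 -1].
    gamma = 1 gives the V-cycle, gamma = 2 the W-cycle."""
    n = len(u)
    def A(v):
        return (2.0 * v - np.r_[v[1:], 0.0] - np.r_[0.0, v[:-1]]) / h**2
    if n == 1:                                     # coarsest level: exact solve
        return np.array([0.5 * f[0] * h**2])
    for _ in range(nu1):                           # nu1 pre-smoothing steps
        u = u + (2.0 / 3.0) * 0.5 * h**2 * (f - A(u))
    r = f - A(u)
    rc = 0.25 * (r[0:-2:2] + 2.0 * r[1:-1:2] + r[2::2])   # full weighting
    ec = np.zeros((n - 1) // 2)
    for _ in range(gamma):                         # recursive coarse correction
        ec = mg_cycle(ec, rc, 2.0 * h, nu1, nu2, gamma)
    ecp = np.r_[0.0, ec, 0.0]                      # linear interpolation
    e = np.zeros(n)
    e[1::2] = ec
    e[0::2] = 0.5 * (ecp[:-1] + ecp[1:])
    u = u + e
    for _ in range(nu2):                           # nu2 post-smoothing steps
        u = u + (2.0 / 3.0) * 0.5 * h**2 * (f - A(u))
    return u

# Stand-alone V(1,1) iteration on a problem with known solution sin(pi x).
n, h = 31, 1.0 / 32.0
x = np.arange(1, n + 1) * h
f = np.pi**2 * np.sin(np.pi * x)
u = np.zeros(n)
for _ in range(10):
    u = mg_cycle(u, f, h, 1, 1, gamma=1)
```

Calling `mg_cycle(u, f, h, 3, 3, gamma=1)` or `gamma=2` yields the V(3,3)- and W(1,1)-variants compared in the experiments below.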

## Numerical results

In the optimal case, we expect the proposed solver to attain iteration numbers *n* that are independent of the number of employed levels *k*. From the measured iteration numbers given in Table 1, we see that the *V*-cycle stand-alone solver with only a single pre- and post-smoothing step already provides acceptable iteration numbers \(n_\mathrm {V(1,1)}< 45\) which, however, are not completely independent of the number of employed levels *k*. Yet, increasing the number of smoothing steps to \(\nu =3\) or changing to the more expensive *W*-cycle yields constant iteration numbers \(n_\mathrm {V(3,3)}\) and \(n_\mathrm {W(1,1)}\) independent of *k*, see also Fig. 7. In fact, further experiments with the proposed multilevel iteration showed that it is already sufficient to increase the number of smoothing steps on coarser levels only, indicating that the quality of the corrections obtained from coarser levels in a *V*(1, 1)-cycle is somewhat diminished for larger *k*. Nevertheless, the use of the *V*(1, 1)-cycle as a preconditioner in CG yields fairly stable iteration numbers \(n_\mathrm {CGV(1,1)} < 20\) and provides the fastest time-to-solution for the considered numbers of levels *k*.

Note that the results summarized in Table 1 were obtained with a small overlap region \(\Omega _\Phi \) of a single element on the finest level; i.e., the overlap region is in fact shrinking as we refine the discretization. As mentioned above, we anticipate that for a larger overlap region \(\Omega _\Phi \) with fixed volume for all levels *k*, i.e. an increasing number of elements in the overlap as we refine, the convergence behavior of the proposed solver will deteriorate somewhat. In fact, the results given in Table 2 show that the number of iterations is not only larger but also grows with an increasing number of levels *k* even when we use the rather expensive *W*(1, 1)-cycle as a preconditioner in CG. Thus, the results confirm our expectation that it is advisable to choose an overlap region \(\Omega _\Phi \) whose diameter is proportional to the meshwidth on the finest level employed (unlike in classical domain decomposition approaches), since such a choice yields the least amount of work in the assembly of the blended linear system \(K_{\mathrm {BND}}\) and gives the best solver performance.

**Table 1**

| \(k\) | DOF | \(n_\mathrm {V(1,1)}\) | \(n_\mathrm {V(3,3)}\) | \(n_\mathrm {W(1,1)}\) | \(n_\mathrm {CGV(1,1)}\) | \(n_\mathrm {CGV(3,3)}\) | \(n_\mathrm {CGW(1,1)}\) |
|---|---|---|---|---|---|---|---|
| 3 | 1479 | 23 | 9 | 23 | 12 | 7 | 12 |
| 4 | 5386 | 22 | 11 | 22 | 12 | 8 | 12 |
| 5 | 20,249 | 19 | 12 | 19 | 12 | 9 | 11 |
| 6 | 79,681 | 22 | 15 | 18 | 12 | 10 | 11 |
| 7 | 315,046 | 26 | 16 | 17 | 14 | 11 | 11 |
| 8 | 1,245,303 | 38 | 22 | 18 | 17 | 12 | 11 |
| 9 | 4,958,241 | 43 | 22 | 18 | 18 | 12 | 11 |

**Table 2**

| \(k\) | DOF | \(n_\mathrm {CGV(1,1)}\) | \(n_\mathrm {CGV(3,3)}\) | \(n_\mathrm {CGW(1,1)}\) |
|---|---|---|---|---|
| 3 | 1475 | 30 | 20 | 30 |
| 4 | 5710 | 37 | 25 | 32 |
| 5 | 21,965 | 53 | 36 | 39 |
| 6 | 88,022 | 56 | 60 | 54 |
| 7 | 350,511 | 133 | 96 | 75 |
| 8 | 1,392,765 | 233 | 168 | 116 |

Here, the *V*(3, 3)-cycle substantially outperforms the *W*(1, 1)-cycle, which shows the improved smoothing property of the patch-based block-Gauß–Seidel relaxation in \(V_{\mathrm {ENR}}\). Nevertheless, the fastest time-to-solution for the considered discretizations with more than 13 million degrees of freedom is still obtained by CG preconditioned by the *V*(1, 1)-cycle, which of course also benefits from the improved smoothing property.

**Table 3**

| \(k\) | DOF | \(n_\mathrm {CGV(1,1)}\) | \(n_\mathrm {CGV(3,3)}\) | \(n_\mathrm {CGW(1,1)}\) |
|---|---|---|---|---|
| 3 | 3,843 | 18 | 10 | 18 |
| 4 | 14,063 | 18 | 10 | 18 |
| 5 | 54,408 | 19 | 11 | 18 |
| 6 | 214,527 | 20 | 11 | 19 |
| 7 | 847,974 | 20 | 12 | 19 |
| 8 | 3,372,646 | 21 | 14 | 19 |
| 9 | 13,380,007 | 22 | 16 | 13 |

Again, we observe essentially constant iteration numbers for the *W*(1, 1)-cycle and slightly increasing iteration numbers when using a *V*-cycle preconditioner. Yet, the fastest time-to-solution is still attained when using the *V*(1, 1)-cycle as preconditioner.

**Table 4**

| \(k\) | DOF | \(n_\mathrm {CGV(1,1)}\) | \(n_\mathrm {CGV(3,3)}\) | \(n_\mathrm {CGW(1,1)}\) |
|---|---|---|---|---|
| 3 | 15,666 | 17 | 11 | 17 |
| 4 | 106,482 | 19 | 14 | 18 |
| 5 | 774,738 | 22 | 17 | 17 |
| 6 | 5,883,282 | 24 | 20 | 16 |

In summary, the presented results clearly show that the proposed solver yields close to optimal convergence in two and three dimensions when using a small overlap. Using a small overlap is moreover beneficial to the total computational cost in the assembly of the blended linear system and still yields optimal approximation errors.

## Concluding remarks

In this paper we proposed a constructive non-invasive approach to the design of efficient multilevel solvers for embedded enriched approximations. The non-invasive embedding scheme is based on a partition of unity approach and can essentially blend arbitrary (overlapping) approximation spaces, yet we consider the special case of embedding an enriched partition of unity space into a classical finite element space. The proposed solver utilizes non-invasive algebraic multigrid technology [16–20] for the automatic construction of a sequence of coarser subspaces of the employed finite element space and a sequence of enriched partition of unity spaces obtained via a geometric coarsening scheme [22]. The presented results clearly indicate that the proposed method can attain (close to) optimal convergence behavior when a small overlap or blending region is employed. A detailed study on the optimal selection of parameters and robustness properties of the proposed scheme is the subject of ongoing and future research.

## Declarations

### Authors' contributions

MAS and AZ developed the method. AZ implemented the method and conducted the numerical experiments. All authors read and approved the final manuscript.

### Acknowledgements

Not applicable.

### Competing interests

The authors declare that they have no competing interests.

### Availability of data and materials

Not applicable.

### Consent for publication

Not applicable.

### Ethics approval and consent to participate

Not applicable.

### Funding

Not applicable.

**Open Access** This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

## References

1. Babuška I, Melenk JM. The partition of unity method. Int J Numer Methods Eng. 1997;40:727–58.
2. Belytschko T, Black T. Elastic crack growth in finite elements with minimal remeshing. Int J Numer Methods Eng. 1999;45:601–20.
3. Duarte CA, Oden JT. An hp adaptive method using clouds. Comput Methods Appl Mech Eng. 1996;139:237–62.
4. Fries T-P, Belytschko T. The extended/generalized finite element method: an overview of the method and its applications. Int J Numer Methods Eng. 2010;84:253–304.
5. Schweitzer MA. Variational mass lumping in the partition of unity method. SIAM J Sci Comput. 2013;35:A1073–97.
6. Schweitzer MA, Ziegenhagel A. Embedding enriched partition of unity approximations in finite element simulations. In: Griebel M, Schweitzer MA, editors. Meshfree methods for partial differential equations VIII. Lecture Notes in Computational Science and Engineering. Cham: Springer International Publishing; 2017. p. 195–204.
7. Bacuta C, Xu J. Partition of unity for the Stokes problem on nonmatching grids. In: Proceedings of the 2003 Copper Mountain conference on multigrid; 2003.
8. Bacuta C, Chen J, Huang Y, Xu J, Zikatanov L. Partition of unity method on nonmatching grids for the Stokes problem. J Numer Math. 2005;13:157–69.
9. Gupta P, Pereira J, Kim D-J, Duarte C, Eason T. Analysis of three-dimensional fracture mechanics problems: a non-intrusive approach using a generalized finite element method. Eng Fract Mech. 2012;90:41–64.
10. Xu J. Iterative methods by space decomposition and subspace correction. SIAM Rev. 1992;34:581–613.
11. Smith BF, Bjørstad PE, Gropp WD. Domain decomposition: parallel multilevel methods for elliptic partial differential equations. Cambridge: Cambridge University Press; 1996.
12. Babuška I, Melenk JM. The partition of unity finite element method: basic theory and applications. Comput Methods Appl Mech Eng. 1996;139:289–314 (special issue on meshless methods).
13. Babuška I, Caloz G, Osborn JE. Special finite element methods for a class of second order elliptic problems with rough coefficients. SIAM J Numer Anal. 1994;31:945–81.
14. Schweitzer MA. Generalizations of the finite element method. Cent Eur J Math. 2012;10:3–24.
15. Amestoy PR, Duff IS, Koster J, L’Excellent J-Y. A fully asynchronous multifrontal solver using distributed dynamic scheduling. SIAM J Matrix Anal Appl. 2001;23:15–41.
16. Brandt A. Algebraic multigrid theory: the symmetric case. Appl Math Comput. 1986;19:23–56.
17. Brandt A, McCormick S, Ruge J. Algebraic multigrid (AMG) for sparse matrix equations. In: Sparsity and its applications (Loughborough, 1983). Cambridge: Cambridge University Press; 1985. p. 257–84.
18. Ruge J, Stüben K. Efficient solution of finite difference and finite element equations. In: Multigrid methods for integral and differential equations (Bristol, 1983), vol. 3 of Inst. Math. Appl. Conf. Ser. New Ser. New York: Oxford University Press; 1985. p. 169–212.
19. Stüben K. A review of algebraic multigrid. J Comput Appl Math. 2001;128:281–309.
20. SAMG—efficiently solving large linear systems of equations. https://www.scai.fraunhofer.de/en/business-research-areas/fast-solvers/products/samg.html.
21. Stüben K, Ruge JW, Clees T, Gries S. Algebraic multigrid: from academia to industry. In: Griebel M, Schüller A, Schweitzer MA, editors. Scientific computing and algorithms in industrial simulation—projects and products of Fraunhofer SCAI. Cham: Springer International Publishing; 2017. p. 83–120.
22. Griebel M, Schweitzer MA. A particle-partition of unity method—part III: a multilevel solver. SIAM J Sci Comput. 2002;24:377–409.
23. Schweitzer MA. A parallel multilevel partition of unity method for elliptic partial differential equations. Lecture Notes in Computational Science and Engineering. Berlin: Springer; 2003.
24. Schweitzer MA. Stable enrichment and local preconditioning in the particle-partition of unity method. Numer Math. 2011;118:137–70.
25. Schweitzer MA. Multilevel particle-partition of unity method. Numer Math. 2011;118:307–28.