PEBL-ROM: Projection-error based local reduced-order models
 David Amsallem^{1} and
 Bernard Haasdonk^{2}
DOI: 10.1186/s40323-016-0059-7
© Amsallem and Haasdonk. 2016
Received: 16 October 2015
Accepted: 3 February 2016
Published: 6 March 2016
Abstract
Projection-based model order reduction (MOR) using local subspaces is becoming an increasingly important topic in the context of the fast simulation of complex nonlinear models. Most approaches rely on multiple local spaces constructed using parameter, time or state-space partitioning. State-space partitioning is usually based on Euclidean distances. This work highlights the fact that the Euclidean distance is suboptimal and that local MOR procedures can be improved by the use of a metric directly related to the projections underlying the reduction. More specifically, scale-invariances of the underlying model can be captured by the use of a true projection error as a dissimilarity criterion instead of the Euclidean distance. The capability of the proposed approach to construct local and compact reduced subspaces is illustrated by approximation experiments on several data sets and by the model reduction of two nonlinear systems.
Keywords
Model order reduction · Reduced basis methods · Local bases
Background
Projection-based model-order reduction (MOR) is an indispensable tool for accelerating large-scale computational procedures and enabling their solution in real-time. This class of approaches proceeds by restricting the solution to a subspace of the entire solution space, resulting in a much smaller set of equations. Many problems, however, are characterized by distinct physical regimes within a given simulation. Among those, one can mention the transition from laminar to turbulent flows, bifurcation of solutions and moving features such as shocks and discontinuities. These simulations are particularly difficult to reduce using classical projection-based MOR as they may require the projection onto large subspaces. These considerations have motivated the recent development of novel local model reduction approaches in which smaller local subspaces are defined and the reduced-order model marches from one subspace to another within a single simulation [1–3]. Local subspaces can be defined in time [1, 2], parameter space [4–6], solution features [7] or state space [3, 6, 8–10].
In the local MOR context, many approaches are based on a notion of distance in order to (1) partition solutions and construct local subspaces offline and (2) determine online which subspace is currently used to define the reduced-order model (ROM) solution. Although the choice of distance measure is particularly important in these procedures, this choice has not yet been the subject of detailed studies. More specifically, most approaches are based on the Euclidean distance, and this choice may be suboptimal as a dissimilarity measure in the context of local MOR. For instance, a Euclidean distance defined in time typically fails to recognize periodic phenomena as well as phase shifts. Similarly, a basis selection using Euclidean or anisotropic distances in the parameter space cannot identify cases where different parameters lead to identical solutions. On the other hand, a Euclidean distance in the state space is able to recognize the two aforementioned classes of phenomena leading to similar or identical solutions. However, a Euclidean distance in the state space does not recognize the linear nature of projections. More specifically, if two snapshots are scaled versions of each other, they can be captured by a unique low-dimensional subspace, but the two snapshots may be very distant in the state space when the measure of distance is the Euclidean norm.
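This last point can be made concrete with a small numerical sketch (an illustration, not code from the paper): for two snapshots that are scaled versions of each other, the Euclidean distance is large while the true projection error onto the one-dimensional span of either snapshot vanishes.

```python
import numpy as np

def projection_error(u, v):
    """Norm of the component of u orthogonal to span{v} (true projection error)."""
    v_hat = v / np.linalg.norm(v)
    return np.linalg.norm(u - (v_hat @ u) * v_hat)

rng = np.random.default_rng(0)
v = rng.standard_normal(100)
u = 5.0 * v  # a scaled version of the same snapshot

# Euclidean distance is large, but u lies exactly in span{v}:
assert np.linalg.norm(u - v) > 1.0
assert projection_error(u, v) < 1e-10
```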
These considerations underline the fact that current local MOR procedures may result in local approximation subspaces that are suboptimal or redundant, leading to unnecessarily large reduced bases. In the present work, a novel local MOR approach is presented. It closely follows the general locality-in-state-space approach developed in [3, 8, 9], but is here based on the true projection error as a natural dissimilarity measure. The proposed approach both reflects the nature of approximation in linear spaces and explicitly captures effects of scale invariance in models. It is based on an extension of the hp-RB approach [11–13], now using the true projection error as a partitioning criterion for a given set of snapshots. The procedure partitions the set of snapshots by the construction of a binary tree structure. Each leaf is a cluster of snapshots which is subsequently reduced by proper orthogonal decomposition (POD).
This paper is organized as follows. In the next section the proposed projection-error-based local ROM approach, PEBL-ROM, is developed and compared to the k-means-based local ROM procedure, KML-ROM. Numerical experiments are conducted in the subsequent section, highlighting the capability of the proposed PEBL-ROM approach to construct small and optimal local reduced-order models. In particular, approximation experiments on toy and real simulation data are presented together with MOR results for two nonlinear dynamical systems. Finally, conclusions are given in the last section.
Methods
Data approximation and nonlinear MOR with local bases
 1. Collection of snapshots from training simulations.
 2. Clustering of the snapshots into k clusters.
 3. Construction of a local reduced basis for each cluster using POD.
 4. Construction of a ROM for each cluster.
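Step 3 above, the POD of a local snapshot set, can be sketched as a truncated singular value decomposition (a minimal illustration; the energy-based truncation via a tolerance \(\varepsilon_{POD}\) is one common choice and is the one assumed here):

```python
import numpy as np

def pod_basis(S, eps_pod):
    """Return an orthonormal basis capturing all but a fraction eps_pod
    of the snapshot energy (measured in squared singular values)."""
    U, s, _ = np.linalg.svd(S, full_matrices=False)
    energy = np.cumsum(s**2) / np.sum(s**2)
    r = int(np.searchsorted(energy, 1.0 - eps_pod)) + 1  # smallest r reaching the energy level
    return U[:, :r]

S = np.random.default_rng(1).standard_normal((50, 20))  # columns are snapshots
V = pod_basis(S, 1e-3)
assert np.allclose(V.T @ V, np.eye(V.shape[1]))  # orthonormal basis
```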
Projection-error based local ROM (PEBL-ROM)
In this section, a new approach for local MOR using the true projection error is proposed as a variant of the hp-RB approach [11, 12] in combination with POD. Note that other types of error measures have already been used in partitioning procedures, e.g., an RB error estimator in the hp-RB approach [11] or the empirical interpolation error in the implicit partitioning approach for function approximation [10].
The offline phase of the PEBL-ROM procedure consists of two stages and is summarized in the pseudocode of Algorithm 1. As input quantities, the proposed algorithm requires the set of snapshots to be processed as well as accuracy thresholds for the bisection procedure and the POD. In stage 1 of the algorithm, a binary tree structure is constructed. Its nodes consist of anchor points that are a subset of the training snapshots. This tree is associated with a non-regular consecutive bisection of the state space. The bisection is defined by comparing the projection errors of new vectors onto the corresponding 1D spaces spanned by the anchor points. This partitioning of the state space therefore defines a partitioning of the training snapshots. In stage 2 of the procedure, local bases are generated by POD applied separately to each of the leaf snapshot sets.
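The core partitioning idea can be sketched in a simplified, flat form (a hypothetical greedy variant for illustration only; the actual Algorithm 1 builds a binary tree, which this sketch does not reproduce): the snapshot with the currently worst 1D projection error becomes a new anchor until every snapshot is approximated to tolerance \(\varepsilon_{bisect}\) by the span of its best anchor.

```python
import numpy as np

def proj_err_1d(u, a):
    """Relative projection error of u onto span{a}."""
    a_hat = a / np.linalg.norm(a)
    return np.linalg.norm(u - (a_hat @ u) * a_hat) / np.linalg.norm(u)

def greedy_anchor_partition(snapshots, eps_bisect):
    """Greedily add anchors until each snapshot is within eps_bisect of
    the 1D span of its best anchor; return anchors and cluster labels."""
    anchors = [snapshots[0]]
    while True:
        errs = np.array([min(proj_err_1d(u, a) for a in anchors)
                         for u in snapshots])
        worst = int(np.argmax(errs))
        if errs[worst] <= eps_bisect:
            break
        anchors.append(snapshots[worst])  # worst-approximated snapshot becomes anchor
    labels = [int(np.argmin([proj_err_1d(u, a) for a in anchors]))
              for u in snapshots]
    return anchors, labels
```

Note that, by construction, scaled snapshots are assigned to the same anchor, since the 1D projection error is invariant under scaling.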
The practical choice of \(\varepsilon _{bisect}\) is problem dependent. One motivated way of choosing this parameter exploits the monotonicity of the map \(\varepsilon _{bisect} \mapsto k\) realized by the PEBL-ROM offline phase and selects \(\varepsilon _{bisect}\) via the number of desired clusters \(k_{d}\). This means one can start from some extremely large value (resulting in \(k=1\)) and some value close to zero (resulting in \(k=n_{u}\)) and perform a (logarithmic) interval-division algorithm, repeatedly running the offline phase, to detect the parameter \(\varepsilon _{bisect}\) that gives \(k=k_d\). We also adopted this procedure in our experiments.
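The described tolerance search can be sketched as a bisection in the logarithm of \(\varepsilon_{bisect}\) (a sketch under stated assumptions: the hypothetical callback `num_clusters_for(eps)` stands for one run of the offline phase returning k, and monotonicity of \(\varepsilon_{bisect} \mapsto k\) is assumed):

```python
import math

def find_eps_for_k(num_clusters_for, k_d, eps_lo=1e-12, eps_hi=1e3, max_iter=60):
    """Bisect on log(eps) until the offline phase yields k_d clusters.
    num_clusters_for(eps) is assumed monotonically non-increasing in eps."""
    for _ in range(max_iter):
        eps_mid = math.sqrt(eps_lo * eps_hi)  # geometric (logarithmic) midpoint
        k = num_clusters_for(eps_mid)
        if k == k_d:
            return eps_mid
        if k > k_d:          # too many clusters -> tolerance too small
            eps_lo = eps_mid
        else:                # too few clusters -> tolerance too large
            eps_hi = eps_mid
    raise RuntimeError("no eps found yielding k_d clusters")
```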
In the case of MOR for instationary dynamical systems, switching between clusters must be ensured. It is obvious that with a fully refined tree and resulting 1D spaces, the space selected for a given query snapshot will be exactly the 1D space spanned by that snapshot. Such a dynamic simulation will never result in a different space, and no switching will occur. Therefore, in the MOR framework and following [3, 8, 9], a variant of the algorithm is considered in which all local snapshot sets used for POD are based on increments of snapshots. This means that the anchor points are still selected from state snapshots, but the local ROBs are generated from a POD of the corresponding increment snapshots, where an increment snapshot is simply the difference between two consecutive snapshots of the dynamic simulation. The use of increment snapshots is demonstrated theoretically and in practice in [3, 8, 9] for MOR using local bases.
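Forming the increment snapshots from a trajectory is a one-line operation (a minimal sketch; in the MOR variant the local PODs are then applied to these increments rather than to the states themselves):

```python
import numpy as np

def increment_snapshots(trajectory):
    """Differences between consecutive state snapshots (stored as columns)."""
    return trajectory[:, 1:] - trajectory[:, :-1]

X = np.cumsum(np.ones((3, 5)), axis=1)  # states 1,2,3,4,5 in each row
dX = increment_snapshots(X)
assert dX.shape == (3, 4)
assert np.allclose(dX, 1.0)
```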
As mentioned in the previous section, in the case of nonlinear ROM simulation, hyper-reduction needs to be performed in order to obtain computational acceleration. First, if using a global hyper-reduction Ansatz (i.e., a single global sample mesh for GNAT, or a single collateral basis set and interpolation points for DEIM), no changes in Algorithms 1 or 2 are required, as they only address the (Galerkin) projection stage, but not the nonlinearity approximation. However, the use of local hyper-reduction (i.e., local interpolation bases, submeshes, etc.) would require essential extensions of the offline and online phases. We refrain from a detailed presentation of these extensions, as we do not make use of them in the experiments, but they can be obtained by following the ideas of [9].
k-means local ROM (KML-ROM)
As for the PEBL-ROM approach, two use cases of this algorithm can be distinguished. First, for approximation experiments, the POD is applied to state snapshots. Second, for dynamic MOR experiments, the POD is applied to local increment snapshots and not the state snapshots themselves.
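The clustering underlying the KML-ROM offline phase can be sketched with a plain Lloyd iteration (a minimal stand-in for the k-means step; the farthest-point initialization is an implementation choice of this illustration, not taken from the paper). Each resulting cluster would then be compressed by a separate POD, as in stage 2 of the PEBL-ROM procedure.

```python
import numpy as np

def kmeans(X, k, iters=100):
    """Plain Lloyd iteration; rows of X are snapshots."""
    # Farthest-point initialization for deterministic, well-spread seeds.
    centers = [X[0]]
    for _ in range(k - 1):
        d = np.min([((X - c) ** 2).sum(axis=1) for c in centers], axis=0)
        centers.append(X[np.argmax(d)])
    centers = np.array(centers)
    for _ in range(iters):
        labels = np.argmin(((X[:, None, :] - centers[None]) ** 2).sum(-1), axis=1)
        new = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                        else centers[j] for j in range(k)])
        if np.allclose(new, centers):
            break
        centers = new
    return labels, centers
```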
Conceptual discussion
The POD in its elementary definition reflects the linear approximation nature of MOR by minimizing the mean true projection error of a given set of snapshots. Therefore, the PEBL-ROM procedure seems a natural extension in the context of local projection-based approximation. The PEBL-ROM approach fully reflects the projection nature of the approximation task in the partitioning, the local space construction, and the online partition selection.
Some remarks can be made when comparing the PEBL-ROM and the KML-ROM algorithms. First, some limiting cases can be considered: in the case of \(k=n_u\), both procedures generate the maximum number of k clusters and optimal 1D approximation spaces, allowing the training error to be zero. This highlights the asymptotic optimality of both approaches.
Further, a remark concerning the computational complexity can be made. The tree structure allows a traversal of the local spaces with a lower computational complexity (logarithmic complexity for a perfectly balanced tree and linear complexity in the worst case) when compared to the linear search in a cluster list generated by k-means. Still, as typical values of k are usually modest in the local MOR context, a large CPU discrepancy at the traversal level should not be expected, and none is observed in practice.
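The online selection by tree traversal might be sketched as follows (a hypothetical node structure for illustration; at each inner node, the child whose anchor yields the smaller 1D projection error is descended until a leaf, carrying its local basis, is reached):

```python
import numpy as np

class Node:
    """Binary tree node: inner nodes carry an anchor, leaves carry a local basis."""
    def __init__(self, anchor, left=None, right=None, basis=None):
        self.anchor, self.left, self.right, self.basis = anchor, left, right, basis

def proj_err(u, a):
    """Projection error of u onto span{a}; invariant under scaling of u or a."""
    a_hat = a / np.linalg.norm(a)
    return np.linalg.norm(u - (a_hat @ u) * a_hat)

def select_leaf(node, u):
    """Descend the tree, choosing the child with the smaller 1D projection error."""
    while node.left is not None:
        if proj_err(u, node.left.anchor) <= proj_err(u, node.right.anchor):
            node = node.left
        else:
            node = node.right
    return node
```

For a balanced tree this costs O(log k) projection-error evaluations per query, versus O(k) for a flat cluster list.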
A final remark can be made about the nestedness property of the local bases. The PEBL-ROM procedure results in a hierarchical partitioning of the training snapshots. Indeed, a fine binary tree can be coarsened by merging child nodes at the parent-node level. This constitutes an advantage over the KML-ROM procedure, for which the clusters are not nested when varying k. In the case of the PEBL-ROM procedure, however, the local ROBs themselves are only nested when \(\varepsilon _{POD}\) is small enough to result in no truncation of the snapshot space.
Results and discussion
Approximation of toy data
In the first set of experiments, the properties of the algorithms are illustrated on artificially generated data of random clouds in \(\mathbb {R}^n\) for \(n=1000\).
The first unimodal dataset consists of 500 points drawn from a single normal random distribution. The mean is set to \(\varvec{0}\) and the covariance matrix is diagonal, with variance \(0.1\) in the first two dimensions and then exponentially decaying as \(0.1\,e^{10(2-i)}\) for \(i=2,\ldots ,n\).
The second multimodal dataset consists of a mixture of four normal distributions, each with the same covariance matrix as the unimodal dataset but with different mean values. From each of the four normal distributions, 100 points are drawn.
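As a hedged sketch, such datasets might be generated as follows (the exponentially decaying variance profile \(0.1\,e^{10(2-i)}\), the random seed and the concrete mixture means are assumptions of this illustration, not values from the paper):

```python
import numpy as np

n = 1000
rng = np.random.default_rng(42)  # assumed seed
i = np.arange(1, n + 1)
var = 0.1 * np.exp(10 * np.minimum(2 - i, 0))  # 0.1 in dims 1-2, then fast decay
std = np.sqrt(var)

# Unimodal: 500 points around the origin.
unimodal = rng.standard_normal((500, n)) * std

# Multimodal: mixture of four shifted copies, 100 points each (assumed mean locations).
means = np.zeros((4, n))
means[:, :2] = [[5, 5], [-5, 5], [-5, -5], [5, -5]]
multimodal = np.vstack([m + rng.standard_normal((100, n)) * std for m in means])
assert multimodal.shape == (400, n)
```

Because the variance decays so rapidly beyond the second coordinate, the clouds are effectively two-dimensional and can be visualized in the plane.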
As expected, the number of parts increases with lower bisection tolerances. Also, one can observe (despite the differing colors) that the partitions are hierarchical in the sense that a coarser anchor point set is a subset of the refined anchor set. Hence, each part of the refined partition is always completely contained in one part of a coarser partition of the state space. The partitioning is based on the true projection error, which is reflected in the fact that all clusters are geometrically double-cones centered at the origin. This illustrates the scale invariance of the parts. In particular, points on the opposite side of an anchor point are assigned to the cluster of that anchor point, although these points are maximally distant from this anchor point with respect to the Euclidean distance. Hence, the projection error has a completely different characteristic than the Euclidean distance. As samples with the currently worst projection error are chosen as new anchor points, it is understandable that these tend to lie at the boundary of the point set and not in the interior.
Plots a and c again confirm the insights obtained from the previous refinement experiment, now with slightly larger numbers of parts, \(k=7\) and \(k=14\). In contrast to this, plots b and d illustrate the training set partitions obtained by the KML-ROM algorithm. One can observe how the cluster centers for the k-means-based procedure tend to be distributed uniformly with respect to the Euclidean distance. The clusters are actually Voronoi cells of a corresponding Voronoi partitioning. With increasing target cluster number k, the clusters are not nested but rather independent. The rather “circular” shape of the clusters of the KML-ROM algorithm, in contrast to the “lengthy” clusters of the PEBL-ROM procedure, might indicate that these k-means clusters require more basis vectors than the clusters obtained from the tree-based procedure. This will indeed be visible in subsequent approximation experiments. Comparing the partitions on a test set of regularly distributed points over a considerably larger square domain in plots e and f reveals that the PEBL-ROM procedure makes full use of the k different clusters in the far field, while the k-means algorithm only uses a smaller number of clusters in the outer regions, as some clusters are bounded, compact and completely contained in the range of the original training set. This motivates the expectation that the PEBL-ROM procedure might generalize better to solution regimes that have not been included in the training data (e.g., scaled snapshots).
Approximation of Burgers equation data
In the subsequent numerical experiments, three training simulations are conducted for the three parameter values \(u_{BC}\in \{2,3,5\}\). The accuracy of the local model reduction methods will then be assessed for those three conditions as well as three additional testing parameter values \(u_{BC}\in \{1.5,4,5.5\}\).
The use of the resulting local bases in MOR will be dealt with later. Here, the approximation properties of the local bases based on the true projection error are investigated.
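The approximation quality reported below can be sketched as a relative summed squared projection error, each snapshot being projected onto its best-approximating local basis (a hypothetical formalization of the error measure used in the figures; orthonormal local bases are assumed):

```python
import numpy as np

def local_projection_test_error(snapshots, bases):
    """Relative summed squared error of the columns of `snapshots`, each
    projected onto its best local basis (orthonormal columns assumed)."""
    num, den = 0.0, 0.0
    for u in snapshots.T:
        errs = [np.linalg.norm(u - V @ (V.T @ u)) ** 2 for V in bases]
        num += min(errs)               # best local basis for this snapshot
        den += np.linalg.norm(u) ** 2
    return num / den
```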
Results of the offline phases for the approximation of Burgers data
                              PEBL-ROM    KML-ROM
Number of bases                     12         12
Sum of overall basis sizes         187        301
Maximum basis size                  68         54
Minimum basis size                   4          4
Mean basis size                15.5833    25.0833
Variance of basis sizes       295.3561   217.9015
While varying the number of local bases (by varying k for the KML-ROM procedure and by choosing \(\varepsilon _{bisect}\) for the PEBL-ROM procedure), the results obtained are summarized in Fig. 7. Both approaches result in training errors below the POD accuracy \(\varepsilon _{POD} = 10^{-5}\), confirming the correctness of the training stage. Otherwise, the approaches are very comparable, the PEBL-ROM procedure perhaps being slightly more accurate.
However, this relation becomes much more pronounced when considering a predictive scenario. In the predictive context, analogous experiments are performed using the set of test snapshots. The results are given in Fig. 8. In a, the relative summed squared test error is reported as a function of the number of local bases. In b, the error is plotted over the average local basis size. One can observe that the PEBL-ROM procedure clearly outperforms the KML-ROM algorithm by almost one order of magnitude in the relative squared error. This relation is even clearer in the case of a small number of local bases or a higher average local basis size. Inspecting the diagram more carefully indicates an increase of the test error for the PEBL-ROM procedure with an increasing number of local bases. We expect that this indicates an overfitting effect, as the training error in the previous figure is simultaneously decreasing.
Overall, it can be concluded from these numerical experiments that the PEBL-ROM procedure provides more compact approximation models in the sense of ROB size versus test error. This is due to the expected scaling properties of the Burgers snapshots. This scaling invariance is captured by the true projection error, while it is overlooked by the Euclidean distance.
Nonlinear MOR for the Burgers equation
Now, the use of the local reduced bases in dynamical problems is investigated via reduced-order simulations. The experiments here use exactly the same trajectory snapshots from the Burgers model as in the previous section. As explained in the methods section, clustering is performed on snapshot increments of the training trajectories.
One can observe that, for small values of \(\varepsilon _{POD}\), the PEBL-ROM approach generally results in more accurate reduced-order models than its k-means counterpart, both for training (top row) and testing parameters (bottom row). Figure 10 shows the error as a function of the average basis size. Again, the PEBL-ROM approach leads to more accurate ROMs.
In a second set of numerical experiments, the truncated POD energy level is fixed to \(\varepsilon _{POD}=10^{-8}\). In that case, the number of local bases is varied from \(k=2\) to 10 and the KML-ROM approach is compared to the PEBL-ROM approach. Figure 11 depicts the error as a function of the number of local ROBs. Again, the PEBL-ROM method leads to more accurate reduced-order models. Figure 12 reports the average ROB size as a function of the number of local bases. It can be observed that the PEBL-ROM approach leads to smaller bases for the same truncation criterion \(\varepsilon _{POD}\). This is confirmed by inspecting Fig. 13, where the error is reported as a function of the average ROB dimensionality. The PEBL-ROM approach leads to both smaller and more accurate ROBs.
Nonlinear MOR for a chemical reaction problem
The PDE is discretized by the finite difference method in space, resulting in a solution vector of dimension \(n=23{,}104\), and by backward Euler finite differences in time with uniform time step \(dt=6\times 10^{-4}\) s.

 - T1, for which \(A = 7\)
 - T2, for which \(A = 10\)
 - P1, for which \(A = 8.5\)
The POD truncation is set to \(\varepsilon _{POD}=10^{-12}\) and the number of local bases is varied from \(k=2\) to 10. The accuracy of the local ROMs obtained with the two approaches (KML-ROM with a clustering overlap of \(f_{\text {add}}=10\,\%\) and PEBL-ROM) is then computed in each case and reported in Fig. 17 for configurations T1 and P1. One can observe that the KML-ROM algorithm is more accurate for the training configuration, with an average error of \(10^{-4}\), versus 0.015 % for the PEBL-ROM approach. However, all models are very accurate here.
Conclusions
A PEBL-ROM approach for local nonlinear model reduction is presented in this work. It relies on a dissimilarity measure defined as the true projection error. The approach proceeds by building offline a binary tree that is used online to determine the local ROB of interest. On a set of toy data, numerical experiments verify that the projection-error-based partitioning creates partitions that are independent of the intuitive “Euclidean” cluster structure. In the approximation, this is reflected in segments being double-cones instead of the Voronoi tessellation of a KML-ROM approach. The projection-error partition generates large “generalization” regions outside of any training samples. The clusters are naturally scale invariant, nicely fitting the projection nature of the approximation, a property not available in other local basis approaches so far. In addition to these approximation experiments, MOR experiments are also performed, illustrating the capability of the proposed PEBL-ROM approach to generate accurate local reduced bases in dynamical simulations that are more robust to changes in parameters than existing approaches. Overall, we see a very good performance of the PEBL-ROM approach over the KML-ROM approach. The situations where the former is inferior to the latter are mainly situations of large k (i.e., small sets of snapshots per subset) and regimes of high POD truncation value. Both situations are not considered to be of major relevance, as accurate ROMs, i.e., models with a low POD truncation value, are of practical interest. Also, the large-k case (i.e., the case where the ratio of k to the number of snapshots gets close to 1) is not of interest, as in the limit this would imply clusters of single training snapshots, for which all clustering procedures and subspaces coincide.
Notes
Declarations
Authors' contributions
Both authors have contributed equally to the manuscript both in writing and the numerical experiments. Both authors read and approved the final manuscript.
Acknowledgements
The first author would like to acknowledge partial support by the Army Research Laboratory through the Army High Performance Computing Research Center under Cooperative Agreement W911NF 0720027, and partial support by the Office of Naval Research under grant no. N000141110707. The second author wants to acknowledge the BadenWürttemberg Stiftung gGmbH for funding as well as the German Research Foundation (DFG) for financial support of the project within the Cluster of Excellence in Simulation Technology (EXC 310/1) at the University of Stuttgart. Both authors would like to thank the respective travel grants that have made this collaboration possible.
Competing interests
The authors declare that they have no competing interests.
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
Authors’ Affiliations
References
 Dihlmann M, Drohmann M, Haasdonk B. Model reduction of parametrized evolution problems using the reduced basis method with adaptive time-partitioning. In: Proc. of ADMOS 2011, International Conference on Adaptive Modeling and Simulation; 2011.
 Drohmann M, Haasdonk B, Ohlberger M. Adaptive reduced basis methods for nonlinear convection-diffusion equations. In: Proc. FVCA6, Finite Volumes and Complex Applications; 2011.
 Amsallem D, Zahr M, Farhat C. Nonlinear model order reduction based on local reduced-order bases. Int J Numer Methods Eng. 2012;92(10):891–916.
 Haasdonk B, Dihlmann M, Ohlberger M. A training set and multiple basis generation approach for parametrized model reduction based on adaptive grids in parameter space. MCMDS. 2011;17:423–42.
 Maday Y, Stamm B. Locally adaptive greedy approximations for anisotropic parameter reduced basis spaces. SIAM J Sci Comput. 2013;35(6):2417–41.
 Peherstorfer B, Butnaru D, Willcox K, Bungartz HJ. Localized discrete empirical interpolation method. SIAM J Sci Comput. 2014;36(1):168–92.
 Redeker M, Haasdonk B. A POD-EIM reduced two-scale model for crystal growth. Adv Comput Math. 2014:1–27. doi:10.1007/s10444-014-9367-y.
 Washabaugh K, Amsallem D, Zahr MJ, Farhat C. Nonlinear model reduction for CFD problems using local reduced-order bases. AIAA Paper 2012-2686, 42nd AIAA Fluid Dynamics Conference and Exhibit, 25–28, New Orleans, Louisiana; 2012. p. 1–16.
 Amsallem D, Zahr MJ, Washabaugh K. Fast local reduced basis updates for the efficient reduction of nonlinear systems with hyper-reduction. Special issue on Model Reduction of Parameterized Systems (MoRePaS). Adv Comput Math. 2015:1–34.
 Wieland B. Implicit partitioning methods for unknown parameter sets. Adv Comput Math. 2015;41:1159–86.
 Eftang JL, Patera AT, Rønquist EM. An \(hp\) certified reduced basis method for parametrized elliptic partial differential equations. SIAM J Sci Comput. 2010;32(6):3170–200.
 Eftang JL, Knezevic DJ, Patera AT. An \(hp\) certified reduced basis method for parametrized parabolic partial differential equations. MCMDS. 2011;17(4):395–422.
 Eftang J, Stamm B. Parameter multi-domain \(hp\) empirical interpolation. Int J Numer Methods Eng. 2012;90(4):412–28.
 Grepl MA, Patera AT. A posteriori error bounds for reduced-basis approximations of parametrized parabolic partial differential equations. ESAIM. 2005;39(1):157–81.
 Kunisch K, Volkwein S. Optimal snapshot location for computing POD basis functions. ESAIM. 2010;44(3):509–29.
 Paul-Dubois-Taine A, Amsallem D. An adaptive and efficient greedy procedure for the optimal training of parametric reduced-order models. Int J Numer Methods Eng. 2015;102(5):1262–92.
 Berkooz G, Holmes P, Lumley JL. The proper orthogonal decomposition in the analysis of turbulent flows. Annu Rev Fluid Mech. 1993;25:539–75.
 Jolliffe IT. Principal component analysis. Berlin, Heidelberg: Springer; 2002. doi:10.1007/b98835.
 Volkwein S. Proper orthogonal decomposition: theory and reduced-order modelling. 2012. http://www.math.uni-konstanz.de/numerik/personen/volkwein/teaching/POD-Book.
 LeGresley PA, Alonso JJ. Airfoil design optimization using reduced order models based on proper orthogonal decomposition. AIAA Paper 2000-2545, Fluids Conference and Exhibit, Denver, CO; 2000. p. 1–14.
 Barrault M, Maday Y, Nguyen NC, Patera AT. An “empirical interpolation” method: application to efficient reduced-basis discretization of partial differential equations. Comptes Rendus de l'Académie des Sciences, Série I. 2004;339:667–72.
 Chaturantabut S, Sorensen D. Nonlinear model reduction via discrete empirical interpolation. SIAM J Sci Comput. 2010;32(5):2737–64. doi:10.1137/090766498.
 Carlberg K, Farhat C, Cortial J, Amsallem D. The GNAT method for nonlinear model reduction: effective implementation and application to computational fluid dynamics and turbulent flows. J Comput Phys. 2013;242:623–47. doi:10.1016/j.jcp.2013.02.028.
 Rewienski M. A trajectory piecewise-linear approach to model order reduction of nonlinear dynamical systems. Ph.D. thesis, Massachusetts Institute of Technology; 2003.
 Buffoni M, Willcox K. Projection-based model reduction for reacting flows. AIAA Paper 2010-5008, 40th Fluid Dynamics Conference and Exhibit, 28 June–1 July 2010, Chicago, IL; 2010.