Table 5 Theoretical complexity per time step, where \(n\) represents the subspace dimension, \(p\) represents the parameter dimension, and \(N_\text {training}\) represents the number of training points

From: Non-intrusive nonlinear model reduction via machine learning approximations to low-dimensional operators

| Method | Complexity | Comments |
| --- | --- | --- |
| SVR 2 | \(O((n+p)\,nN_\text{training})\) | |
| SVR 3 | \(O((n+p)\,nN_\text{training})\) | |
| SVR rbf | \(O((n+p)\,nN_\text{training})\) | |
| Random Forest | \(O(nN_\text{trees}N_\text{training}\log(N_\text{training}))\) | \(N_\text{trees}\): number of decision trees |
| Boosting | \(O(nN_\text{learners})\) | \(N_\text{learners}\): number of weak learners |
| kNN | \(O((n+p)\,nN_\text{training}+KnN_\text{training})\) | \(K\): number of nearest neighbors |
| VKOGA | \(O((n+p)\,N_\text{functions})\) | \(N_\text{functions}\): number of kernel functions |
| SINDy | \(O(nN_\text{bases})\) | \(N_\text{bases}\): number of bases, \(N_\text{bases} < n^2\) |
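To make the asymptotic expressions above concrete, the leading-order operation counts can be compared numerically. The sketch below is illustrative only: the example values of \(n\), \(p\), \(N_\text{training}\), and the other method-specific counts are assumptions, not values from the paper, and constant factors are deliberately ignored.

```python
# Sketch: leading-order per-time-step operation counts from Table 5.
# All problem sizes below are illustrative assumptions; constants are
# dropped, so the numbers are only meaningful relative to one another.
import math

def svr_cost(n, p, N_training):
    # SVR (degree-2, degree-3, or RBF kernel): O((n + p) * n * N_training)
    return (n + p) * n * N_training

def random_forest_cost(n, N_trees, N_training):
    # O(n * N_trees * N_training * log(N_training))
    return n * N_trees * N_training * math.log(N_training)

def boosting_cost(n, N_learners):
    # O(n * N_learners)
    return n * N_learners

def knn_cost(n, p, N_training, K):
    # O((n + p) * n * N_training + K * n * N_training)
    return (n + p) * n * N_training + K * n * N_training

def vkoga_cost(n, p, N_functions):
    # O((n + p) * N_functions)
    return (n + p) * N_functions

def sindy_cost(n, N_bases):
    # O(n * N_bases), where N_bases < n^2
    return n * N_bases

# Hypothetical sizes: subspace dimension n, parameter dimension p,
# and number of training points N_training.
n, p, N_training = 10, 5, 1000
print("SVR:  ", svr_cost(n, p, N_training))
print("kNN:  ", knn_cost(n, p, N_training, K=5))
print("SINDy:", sindy_cost(n, N_bases=50))
```

Note that the SVR and kNN estimates grow linearly in \(N_\text{training}\), whereas boosting, VKOGA, and SINDy depend only on the number of learners, kernel functions, or bases, which is why the latter can be much cheaper per time step once trained.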