Some robust integrators for large time dynamics

This article reviews some integrators particularly suitable for the numerical resolution of differential equations over a large time interval. Symplectic integrators are presented first; their stability over exponentially large times is shown through numerical examples. Next, Dirac integrators for constrained systems are presented, with an application to chaotic dynamics. Lastly, for systems having no exploitable geometric structure, the Borel-Laplace integrator is presented. Numerical experiments on Hamiltonian and non-Hamiltonian systems are carried out, as well as on a partial differential equation.

Keywords: Symplectic integrators, Dirac integrators, long-time stability, Borel summation, divergent series.


Introduction
In many domains of mechanics, simulations over a large time interval are crucial. This is, for instance, the case in molecular dynamics, in weather forecasting and in astronomy. While many time integrators are available in the literature, only a few of them are suitable for large time simulations. Indeed, many numerical schemes fail to correctly predict the expected physical properties, such as energy preservation, as the simulation time grows.
For equations having an underlying geometric structure (Hamiltonian systems, variational problems, Lie symmetry group, Dirac structure, . . . ), geometric integrators appear to be very robust for large time simulation. These integrators mimic the geometric structure of the equation at the discrete scale.
The aim of this paper is to review some time integrators which are suitable for large time simulations. We consider not only equations having a geometric structure but also more general equations. We first present symplectic integrators for Hamiltonian systems. We show in the "Symplectic integrators" section their ability to preserve the Hamiltonian function and some other integrals of motion. Applications are made to a periodic Toda lattice and to n-body problems. To simplify, the presentation is done in canonical coordinates.
In "Dirac integrators" section, we show how to fit a constrained problem into a Dirac structure. We then detail how to construct a geometric integrator respecting the Dirac structure. The presentation will be simplified, and the (although very interesting) theoretical geometry is skipped. References will be given for more in-depth understanding. A numerical experiment, showing the good long time behaviour of Dirac integrators, will be carried out.
In "Borel-Laplace integrator" section, we present the Borel-Padé-Laplace integrator (BPL). BPL is a general-purpose time integrator, based on a time series decomposition of the solution, followed by a resummation to enlarge the validity of the series and then reducing the numerical cost on a large time simulation. Finally, the long time behaviour will be investigated through numerical experiments on Hamiltonian and non-Hamiltonian system, as well as on a partial differential equation. Numerical cost will be examined when relevant.

Symplectic integrators
We first recall some facts about Hamiltonian systems and their flows in canonical coordinates. Some examples of symplectic integrators are given afterwards, and numerical experiments are presented.

Hamiltonian system
A Hamiltonian system in $\mathbb{R}^d \times \mathbb{R}^d$ is a system of differential equations which can be written as follows:
$$\dot q = \frac{\partial H}{\partial p}, \qquad \dot p = -\frac{\partial H}{\partial q}, \tag{1}$$
the Hamiltonian H being a function of time and of the unknown vectors q = (q_1, ..., q_d)^T and p = (p_1, ..., p_d)^T. Equation (1) can be written in a more compact way as follows:
$$\dot u = J\,\nabla H(u), \tag{2}$$
where u = (q, p)^T, ∇H = ∂H/∂u and J is the skew-symmetric matrix
$$J = \begin{pmatrix} 0 & I_d \\ -I_d & 0 \end{pmatrix}.$$
The flow of the Hamiltonian system (2) at time t is the function $\varphi_t$ which, to an initial condition u_0, associates the solution u(t) of the system. More precisely, $\varphi_t$ is defined as
$$\varphi_t : u_0 \mapsto u(t), \tag{3}$$
where u solves (2) with u(0) = u_0. The property that J^{-1} = J^T = -J leads to the symplecticity property of $\varphi_t$:
$$\left(\frac{\partial \varphi_t}{\partial u_0}\right)^{\!T} J \left(\frac{\partial \varphi_t}{\partial u_0}\right) = J. \tag{4}$$
Note that J can be seen as an area form, in the following sense. If v and w are two vectors of $\mathbb{R}^{2d}$, then
$$v^T J\, w = \sum_{i=1}^{d} \left( v_{q_i} w_{p_i} - v_{p_i} w_{q_i} \right).$$
In words, v^T J w is the sum of the (oriented) areas formed by v and w in the planes (q_i, p_i). The symplecticity property (4) then means that the flow of a Hamiltonian system is area preserving.
In the sequel, H is assumed autonomous in time. It can then be shown that H is preserved along trajectories.
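To make the symplecticity condition (4) concrete, here is a small numerical sketch (ours, not from the article): for the harmonic oscillator H = (q² + p²)/2 in dimension d = 1, the exact flow is a rotation, and its Jacobian R(t) satisfies R(t)^T J R(t) = J for every t.

```python
import math

# Harmonic oscillator H(q, p) = (q^2 + p^2)/2 in d = 1.
# With u = (q, p), equation (2) gives q' = p, p' = -q, whose exact flow is
#   (q(t), p(t)) = (q0 cos t + p0 sin t, -q0 sin t + p0 cos t),
# so the Jacobian of the flow is the rotation matrix R(t) below.

def flow_jacobian(t):
    c, s = math.cos(t), math.sin(t)
    return [[c, s], [-s, c]]

J = [[0.0, 1.0], [-1.0, 0.0]]  # canonical structure matrix for d = 1

def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def transpose(A):
    return [[A[j][i] for j in range(2)] for i in range(2)]

# Symplecticity: R(t)^T J R(t) must equal J for every t.
R = flow_jacobian(1.7)
lhs = mat_mul(transpose(R), mat_mul(J, R))
err = max(abs(lhs[i][j] - J[i][j]) for i in range(2) for j in range(2))
print(err)  # numerically zero
```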

Flow of a numerical scheme
Consider a numerical scheme which computes an approximation u_n of the solution u(t_n) of Eq. (2) at time t_n. The flow of this scheme is defined as
$$\varphi_{t_n} : u_0 \mapsto u_n.$$
For a one-step integrator, with a time step $\Delta t = t_{n+1} - t_n$ (which may depend on n), it is more convenient to work with the one-step flow
$$\varphi_{\Delta t} : u_n \mapsto u_{n+1}.$$
As an example, consider the explicit Euler integration scheme
$$u_{n+1} = u_n + \Delta t\, J\,\nabla H(u_n).$$
The one-step flow of this scheme is
$$\varphi_{\Delta t}(u_n) = u_n + \Delta t\, J\,\nabla H(u_n).$$
The one-step flow of a fourth order Runge-Kutta scheme is
$$\varphi_{\Delta t}(u_n) = u_n + \frac{\Delta t}{6}\left( k_1 + 2k_2 + 2k_3 + k_4 \right),$$
where, writing $f(u) = J\,\nabla H(u)$,
$$k_1 = f(u_n), \quad k_2 = f\!\left(u_n + \tfrac{\Delta t}{2} k_1\right), \quad k_3 = f\!\left(u_n + \tfrac{\Delta t}{2} k_2\right), \quad k_4 = f(u_n + \Delta t\, k_3).$$

Some symplectic integrators
A time integrator is called symplectic if its flow is symplectic, meaning that
$$\left(\frac{\partial \varphi_{\Delta t}}{\partial u}\right)^{\!T} J \left(\frac{\partial \varphi_{\Delta t}}{\partial u}\right) = J.$$
Geometrically, a symplectic integrator is then a time scheme which preserves the area form. For a one-step scheme, this property is equivalent to
$$\left(\frac{\partial u_{n+1}}{\partial u_n}\right)^{\!T} J \left(\frac{\partial u_{n+1}}{\partial u_n}\right) = J$$
at each iteration n. When d = 1, it can easily be shown that, for the explicit Euler scheme,
$$\det \frac{\partial u_{n+1}}{\partial u_n} = 1 + \Delta t^2 \left( \frac{\partial^2 H}{\partial q^2}\frac{\partial^2 H}{\partial p^2} - \left(\frac{\partial^2 H}{\partial q\,\partial p}\right)^{\!2} \right) \neq 1 \ \text{in general},$$
meaning that the explicit Euler scheme is not symplectic. Neither is the implicit Euler scheme. By contrast, by mixing the explicit and the implicit Euler schemes, we get a symplectic scheme, called the symplectic Euler scheme, defined as follows:
$$q_{n+1} = q_n + \Delta t\,\frac{\partial H}{\partial p}(q_n, p_{n+1}), \qquad p_{n+1} = p_n - \Delta t\,\frac{\partial H}{\partial q}(q_n, p_{n+1}). \tag{12}$$
Note that in (12), one can take (q_{n+1}, p_n) in the right-hand side instead of (q_n, p_{n+1}). This leads to the other symplectic Euler scheme. Both symplectic Euler schemes are first order. A way to get a second order scheme is to compose two symplectic Euler schemes with time steps $\Delta t/2$. One then obtains the Störmer-Verlet schemes [13]. Another way is to take the mid-point in the right-hand side of an Euler scheme [15], yielding the mid-point scheme:
$$u_{n+1} = u_n + \Delta t\, J\,\nabla H(u_{n+1/2}), \qquad \text{where } u_{n+1/2} = \frac{u_n + u_{n+1}}{2}. \tag{13}$$
Symplectic Runge-Kutta schemes of higher order can be built as follows. An s-stage Runge-Kutta integrator of Eq. (1) is defined as [14,29]:
$$k_i = J\,\nabla H\Big( u_n + \Delta t \sum_{j=1}^{s} \alpha_{ij}\, k_j \Big), \quad i = 1, \dots, s, \qquad u_{n+1} = u_n + \Delta t \sum_{i=1}^{s} \beta_i\, k_i, \tag{14}$$
for some real numbers α_ij, β_i, i, j = 1, ..., s. Scheme (14) is symplectic if and only if the coefficients verify the relation [18,28]:
$$\beta_i \alpha_{ij} + \beta_j \alpha_{ji} - \beta_i \beta_j = 0, \qquad i, j = 1, \dots, s. \tag{15}$$
An example of a symplectic Runge-Kutta scheme is the 4th order, 3-stage scheme defined by the coefficients (16). Many other variants of symplectic Runge-Kutta methods can be found in the literature (see for example [10]). Symplectic integrators do not preserve the Hamiltonian exactly in general. However, the symplecticity condition seems to be strong enough that, experimentally, symplectic integrators exhibit a good behaviour with respect to the preservation property. In fact, we have the following error estimation on the Hamiltonian [1,12]:
$$\left| H(u_n) - H(u_0) \right| \le \gamma\, \Delta t^{\,r} \qquad \text{for } n\,\Delta t \le e^{\,\gamma_0/\Delta t},$$
for some constants γ, γ_0 > 0, r being the order of the symplectic scheme.
This relation states that the error is bounded over an exponentially long discrete time. Moreover, it was shown in [5] that symplectic Runge-Kutta methods preserve exactly quadratic invariants.
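The contrast between a symplectic and a non-symplectic scheme can be reproduced in a few lines. The following sketch (our illustration, not the article's code) compares the explicit Euler scheme with the symplectic Euler scheme (12) on a pendulum, H = p²/2 − cos q: the energy error of the former drifts, while that of the latter stays bounded.

```python
import math

# Pendulum H(q, p) = p^2/2 - cos(q).  We compare the explicit Euler scheme
# with the symplectic Euler scheme (12):
#   p_{n+1} = p_n - dt * dH/dq(q_n),  q_{n+1} = q_n + dt * p_{n+1}.
# (Here dH/dq does not depend on p, so the scheme is explicit in practice.)

def H(q, p):
    return 0.5 * p * p - math.cos(q)

def explicit_euler(q, p, dt):
    return q + dt * p, p - dt * math.sin(q)

def symplectic_euler(q, p, dt):
    p_new = p - dt * math.sin(q)
    return q + dt * p_new, p_new

dt, n_steps = 0.05, 20000          # simulate up to t = 1000
qe, pe = 1.0, 0.0                  # explicit Euler state
qs, ps = 1.0, 0.0                  # symplectic Euler state
H0 = H(1.0, 0.0)
err_explicit = err_symplectic = 0.0
for _ in range(n_steps):
    qe, pe = explicit_euler(qe, pe, dt)
    qs, ps = symplectic_euler(qs, ps, dt)
    err_explicit = max(err_explicit, abs(H(qe, pe) - H0))
    err_symplectic = max(err_symplectic, abs(H(qs, ps) - H0))

print(err_symplectic, err_explicit)  # bounded vs. drifting
```

With the same time step, the explicit Euler energy error grows without bound, while the symplectic Euler error merely oscillates at a level proportional to Δt.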
In the next subsection, some interesting numerical properties of symplectic schemes are highlighted through some model problems.

Periodic Toda lattice
The evolution of a periodic Toda lattice with d particles can be described with the Hamiltonian function
$$H(q, p) = \sum_{k=1}^{d} \left( \frac{p_k^2}{2} + e^{\,q_k - q_{k+1}} \right),$$
where q_k is the (one-dimensional) position of the k-th particle, q_{d+1} = q_1 and p_k is its momentum. A periodic Toda lattice is completely integrable and it is known that the eigenvalues of the following matrix L are first integrals of the system:
$$L = \begin{pmatrix} b_1 & a_1 & & a_d \\ a_1 & b_2 & \ddots & \\ & \ddots & \ddots & a_{d-1} \\ a_d & & a_{d-1} & b_d \end{pmatrix}, \qquad a_k = \tfrac{1}{2} e^{(q_k - q_{k+1})/2}, \quad b_k = -\tfrac{1}{2} p_k. \tag{18}$$
In the numerical test, we consider d = 3 particles, positioned initially at q_1 = 0, q_2 = 2 and q_3 = 3. The initial momenta are p_1 = 0.5, p_2 = −1.5 and p_3 = 1.
First, we choose a time step Δt = 10^{-2}. The Hamiltonian equation is solved with the classical 4th order Runge-Kutta scheme (RK4) and its symplectic version (RK4sym) defined by (16), up to t = 5000. The relative error on the Hamiltonian is presented in Fig. 1. As can be seen, the RK4 error oscillates and increases globally linearly. It remains acceptable for t < 5000, since it does not exceed 2.735·10^{-5}. The RK4sym error also oscillates but is much closer to zero. It is bounded by 4.625·10^{-7}, that is, two orders of magnitude below the RK4 error at t = 5000, as can be observed in Fig. 1b. Figure 2 shows that both schemes globally preserve the three eigenvalues of the matrix L.
To analyze the robustness of the schemes, Δt is now set to 10^{-1}. With this time step, the error of the classical Runge-Kutta scheme increases quickly from the first iterations, as can be observed in Fig. 3. It reaches 50% around t = 4.2·10^3. The RK4sym error, for its part, oscillates around 2.26·10^{-3} but does not present any increasing global tendency. Its highest value is about 3.27·10^{-3}, as can be checked in the figure.
A comparison between RK4 and the symplectic Euler scheme defined in (12) is also given in Fig. 3b. It shows that the error of the Euler scheme oscillates around 0.83 Δt and does not exceed 2.42 Δt. So, for t greater than 820, even the first order symplectic Euler scheme produces a smaller error on the Hamiltonian than the 4th order non-symplectic Runge-Kutta scheme.
The evolution of the eigenvalues of the matrix L is presented in Fig. 4. It clearly shows that the eigenvalues are not preserved by the classical Runge-Kutta scheme. For example, the computed smallest eigenvalue at t = 5000 is about −3.50, whereas its initial value is −5.24. With the symplectic Runge-Kutta scheme, the error on the smallest eigenvalue oscillates around 4.85·10^{-3} but does not present an increasing tendency. With the first order symplectic Euler scheme, the oscillations are much more pronounced, as can be observed in Fig. 5, but, as with RK4sym, there is no increasing trend. The error becomes smaller than the non-symplectic RK4 error as the simulation time increases. It is clear from these experiments that symplectic schemes are particularly stable for large time simulations, where the user wishes a time step as large as possible to reduce the computation cost. In some situations, even a symplectic scheme of lower order gives better results over a long time than a classical integrator.
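The qualitative behaviour can be reproduced with a minimal sketch (ours; it uses the first order symplectic Euler scheme rather than RK4sym, and monitors the traces of L, which are symmetric functions of its eigenvalues, instead of the eigenvalues themselves; with the conventions above, tr(L²) = H/2).

```python
import math

# Periodic Toda lattice, d = 3, H = sum(p_k^2/2 + exp(q_k - q_{k+1})).
# With a_k = exp((q_k - q_{k+1})/2)/2 and b_k = -p_k/2 (Flaschka variables),
# tr(L) and tr(L^2) are first integrals; in particular tr(L^2) = H/2.
# We integrate with the symplectic Euler scheme and monitor tr(L^2).

d = 3
q = [0.0, 2.0, 3.0]
p = [0.5, -1.5, 1.0]

def hamiltonian(q, p):
    return sum(p[k] ** 2 / 2 + math.exp(q[k] - q[(k + 1) % d])
               for k in range(d))

def trace_L2(q, p):
    a = [math.exp((q[k] - q[(k + 1) % d]) / 2) / 2 for k in range(d)]
    b = [-p[k] / 2 for k in range(d)]
    return sum(bk ** 2 for bk in b) + 2 * sum(ak ** 2 for ak in a)

def symplectic_euler_step(q, p, dt):
    # dH/dq_k = exp(q_k - q_{k+1}) - exp(q_{k-1} - q_k), independent of p
    grad = [math.exp(q[k] - q[(k + 1) % d]) - math.exp(q[(k - 1) % d] - q[k])
            for k in range(d)]
    p_new = [p[k] - dt * grad[k] for k in range(d)]
    q_new = [q[k] + dt * p_new[k] for k in range(d)]
    return q_new, p_new

dt = 1e-3
H0 = hamiltonian(q, p)
t2_0 = trace_L2(q, p)        # equals H0/2 by the identity above
drift = 0.0
for _ in range(10000):       # up to t = 10
    q, p = symplectic_euler_step(q, p, dt)
    drift = max(drift, abs(trace_L2(q, p) - t2_0))
print(drift / abs(t2_0))     # small relative drift
```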

n-body problem
As a second example, consider the system of d bodies subjected to mutual gravitational forces. The evolution of the system is described by the Hamiltonian function
$$H(q, p) = \sum_{k=1}^{d} \frac{\| p_k \|^2}{2 m_k} - G \sum_{k=1}^{d} \sum_{l=k+1}^{d} \frac{m_k m_l}{\| q_k - q_l \|}.$$
In this expression, q_k and p_k are the position vector and the momentum of the k-th body, m_k is its mass and G is the gravitational constant.
For the simulation, we take d = 3. We consider the initial configuration corresponding to the choreographic figure-eight in [4]. The solution is periodic with period T. The simulation is run up to t = 2200T, with a time step Δt = 0.02T. In this configuration, the classical Runge-Kutta scheme provides a fairly good result, but the symplectic scheme is much more accurate regarding the preservation properties. The relative error of RK4 on the Hamiltonian increases linearly and is about 1.87·10^{-5} at the final time. It can be viewed in Fig. 7a in logarithmic scale. The relative error of RK4sym on H oscillates but never exceeds 9.99·10^{-8}. The errors on the angular momentum are plotted in Fig. 7b. They show that the error of RK4 is bounded by 1.4·10^{-9}, whereas the error of RK4sym remains at the level of machine precision (around 2·10^{-12}).
In a second simulation, the initial position and momentum of the body in the middle in Fig. 6 are changed into their values at t = T/80 in the figure-eight solution. The initial configurations of the bodies at the two ends are kept as in the previous simulation. In this case, the figure-eight is broken. A part of the trajectory of each body and their initial positions are presented in Fig. 6b. The evolution of the error on the Hamiltonian and on the angular momentum is plotted in Fig. 8. As can be noticed, the error on the Hamiltonian increases quickly with the classical RK4. It reaches 50 percent at t ≈ 2166T ≈ 13702. These numerical experiments show again that symplectic schemes are more robust than classical ones for long time dynamics simulation. They have a good behaviour regarding the preservation of the Hamiltonian and of some other first integrals, despite the perturbation introduced in the initial configuration. Similar results have been obtained in a previous work [24] on a harmonic oscillator, on Kepler's problem and on vortex dynamics.
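The exact conservation of the angular momentum, observed above for RK4sym, can be checked on a minimal example. The following sketch (ours, not the article's setup) applies the symplectic Euler scheme to the planar Kepler problem; the angular momentum q_x p_y − q_y p_x is a quadratic invariant of the form p^T C q, which symplectic Euler conserves to round-off.

```python
import math

# Planar Kepler problem H = |p|^2/2 - 1/|q|  (G = m = 1).
# Symplectic Euler step:
#   p_{n+1} = p_n - dt * dH/dq(q_n),  q_{n+1} = q_n + dt * p_{n+1}.
# One checks directly that Lz = qx*py - qy*px is conserved exactly:
#   q_{n+1} x p_{n+1} = (q_n + dt p_{n+1}) x p_{n+1} = q_n x p_{n+1}
#                     = q_n x (p_n - dt q_n/|q_n|^3) = q_n x p_n .

def step(qx, qy, px, py, dt):
    r3 = (qx * qx + qy * qy) ** 1.5
    px -= dt * qx / r3          # momentum update uses q_n only
    py -= dt * qy / r3
    qx += dt * px               # position update uses p_{n+1}
    qy += dt * py
    return qx, qy, px, py

qx, qy, px, py = 1.0, 0.0, 0.0, 1.2   # slightly eccentric bound orbit
L0 = qx * py - qy * px
drift = 0.0
dt = 1e-3
for _ in range(50000):                # several orbital periods
    qx, qy, px, py = step(qx, qy, px, py, dt)
    drift = max(drift, abs(qx * py - qy * px - L0))
print(drift)                          # at round-off level
```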
Obviously, not all mechanical systems fit into the Hamiltonian formalism, hence there are other geometric constructions that are worth being considered in the context of structure-preserving integrators.

Dirac integrators
In this section, we give an overview of the so-called Dirac structures and describe a class of mechanical systems where they appear naturally, namely systems with constraints. Dirac structures originally appeared in the work of Courant [6], with a motivation coming from mechanics. As is known, for mechanical systems one can choose between the Lagrangian and Hamiltonian formalisms, both being equivalent in finite dimension. The rough idea behind Dirac structures is to consider both formalisms simultaneously, i.e. to work with velocities and momenta, while not forgetting that these are dependent variables. Geometrically, this means that instead of choosing between the tangent bundle TM and the cotangent bundle T*M for the phase space, we consider their direct sum E = TM ⊕ T*M and a subbundle of it, subject to some compatibility conditions. The original work did not have direct applications to mechanics, since the geometry of the problem turned out to be rather intricate; it gave rise instead to a lot of development in higher structures and in theoretical physics. In the last decade, however, the subject was revived with the introduction of so-called port-Hamiltonian [32] and implicit Lagrangian systems [33,34].

Geometric construction
So as not to overload the presentation with geometric details, let us talk about spaces instead of bundles. To recover the geometric picture, the motivated reader is invited to read the original paper [6] of Courant or the overview of relevant results in [27]. The object of study will be $\mathbb{R}^{2d}$ (with the same notations as at the beginning of the previous section), and some natural constructions around it. We view it as $\mathbb{R}^{2d} = \mathbb{R}^d \times V^*$, that is, the trivial bundle over $\mathbb{R}^d$ with fiber $V^*$ — the dual of some d-dimensional vector space V. Morally, V is the space of velocities v at each point q, and $V^*$ corresponds to the space of momenta p. In this setting, imposing constraints on the system means defining some restriction on the couples (q, v): not all points q are permitted, and at each permitted point q, the velocity v is not arbitrary but belongs to a subspace of V. Under some regularity conditions, one can say that at each point q the velocity v belongs to a subspace $\Delta_q \subset V$ which is the kernel of a set of linear forms $\alpha^a$:
$$\Delta_q = \{ v \in V : \alpha^a(q)(v) = 0, \ a = 1, \dots, m \}.$$
To transfer this construction from $\mathbb{R}^d \times V$ to $\mathbb{R}^d \times V^*$, one needs to consider double vector bundles [31]. In our simplified setting, this means that the space of interest is $\mathcal{V} = \mathbb{R}^{4d}$, where each component has some geometric interpretation. Namely, we consider $\mathcal{V}$ as the tangent to $\mathbb{R}^d \times V^*$. Naturally, $\mathbb{R}^d \times V$ is embedded in $\mathcal{V}$ (recall that V is tangent to $\mathbb{R}^d$). The constraint set is then a subset $\tilde\Delta \subset \mathcal{V}$, and the differential forms $\alpha^a(q)$ generate its annihilator $\Delta^0$, which naturally belongs to $\mathcal{V}^*$. Note that, since $\mathbb{R}^d \times V^*$ is a symplectic space, it is equipped with a bilinear antisymmetric non-degenerate closed form ω (this form is the generalisation of the matrix J of the "Symplectic integrators" section to non-canonical coordinates). One can then construct a symplectic mapping $\omega^\flat : \mathcal{V} \to \mathcal{V}^*$.
We can now define the Dirac structure associated with the constrained system:
$$D_\Delta = \left\{ (w, \alpha) \in \mathcal{V} \oplus \mathcal{V}^* : w \in \tilde\Delta, \ \alpha - \omega^\flat(w) \in \Delta^0 \right\}.$$
To define the dynamics of the system, we introduce two more objects. First, the Lagrangian, which as usual is a mapping $L : \mathbb{R}^d \times V \to \mathbb{R}$, induces its differential dL, a mapping from $\mathbb{R}^d \times V$ to its cotangent. By post-composing it with a symplectomorphism of the appropriate double bundles, one constructs the Dirac differential $\mathcal{D}L$, which locally reads
$$\mathcal{D}L : (q, v) \mapsto \left( q, \frac{\partial L}{\partial v}, -\frac{\partial L}{\partial q}, v \right).$$
Second, the evolution of the system will be described by a so-called partial vector field X, i.e. a mapping
$$X : \mathrm{Leg}(\Delta) \to \mathcal{V},$$
where Leg(Δ) is the image of Δ by the Legendre transform. X should be viewed as a vector field on $\mathbb{R}^d \times V^*$, with the momenta parametrized by the Legendre transform of the velocities compatible with the constraints.
With the above notations, the implicit Lagrangian system is a triple (L, Δ, X) such that $(X, \mathcal{D}L) \in D_\Delta$. In local coordinates, this means:
$$\dot q = v, \qquad p = \frac{\partial L}{\partial v}, \qquad \alpha^a(q)(v) = 0, \qquad \dot p - \frac{\partial L}{\partial q} \in \mathrm{span}\{ \alpha^a(q) \}.$$
One easily understands the mechanical interpretation of the first three equations above.
The fourth one can be rewritten as
$$\dot p = \frac{\partial L}{\partial q} + \sum_{a=1}^{m} \lambda_a\, \alpha^a(q),$$
and one immediately recognizes the Lagrange multipliers λ_a.

Discretization
It is important to note that the previous section is not just a "fancy" way of recovering the well-known theory: every step of the construction admits a discrete analogue. We briefly present the recipe of this discretization and, again, refer the interested reader to [27] for details and examples. The system is characterized by the following continuous data: the Lagrangian function L(q, q̇) and the set of constraint 1-forms $\alpha^a$, a = 1, ..., m. The discrete version L_d of the Lagrangian at time t_n is
$$L_d(q_n, q_{n+1}) = \Delta t\, L(q_n, v_n),$$
and the constraints are rewritten as
$$\alpha^a_d(q_n)(v_n) = 0,$$
where $\alpha^a_d$ is a discrete version of $\alpha^a$. In the equations above, q_n is the value of q and v_n is an approximation of the velocity v, both at time t_n.
To construct the numerical method out of these data, one applies the following procedure:
$$p_{n+1} = \frac{\partial L}{\partial v}(q_n, v_n), \tag{20}$$
$$p_{n+1} - p_n - \Delta t\, \frac{\partial L}{\partial q}(q_n, v_n) = \Delta t \sum_{a=1}^{m} \lambda_a\, \alpha^a_d(q_n), \tag{21}$$
$$\alpha^a_d(q_n)(v_n) = 0, \qquad a = 1, \dots, m. \tag{22}$$
The variables appearing explicitly are the values of p at the n-th and the (n+1)-st steps, and v_n should be some approximation of the velocity, thus bringing q_{n+1} into the system. Here, we consider two natural options:
• v_n := (q_{n+1} − q_n)/Δt, which we label Dirac-1, and
• v_n := (q_{n+1} − q_{n−1})/(2Δt), labelled Dirac-2.
In both cases, we obtain 2d + m equations: d from each of the lines (20) and (21) of the equations above, and m from the constraints (22). At the n-th step the unknowns are q n+1 , p n+1 and λ = (λ 1 , . . . , λ m ), so the system we obtain is complete. It is linear in λ and p n+1 , and when the constraints are holonomic, in q n+1 as well.
It is important to note that, in some sense, this construction generalizes the previous section. If one considers a system without constraints but still applies the procedure, (22) becomes obsolete, and in (21) the right-hand side vanishes, so one obtains a numerical method for the dynamics of a Lagrangian system governed by L. By a straightforward computation, one checks that for a natural mechanical system with a potential U, i.e. when L = ½mv² + U(q), Dirac-1 is symplectic. It is also meaningful to consider a symplectic version of Dirac-2 (we do not detail it here, since we would need to explain what symplecticity is for a multistep method).
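As an illustration of the recipe, here is a sketch of a Dirac-1 step for a single planar pendulum (unit mass, unit length). This is our reading of the discrete equations, not the authors' implementation; in particular, the discretization of the constraint forms may differ in detail.

```python
import math

# Illustrative sketch (our reading of the Dirac-1 recipe): a single planar
# pendulum as a constrained system in R^2, with L = |v|^2/2 - g*y (unit
# mass) and constraint |q| = 1, whose 1-form is alpha(q) = q (the
# differential of |q|^2/2).  At each step we solve the linear system for
# (q_{n+1}, p_{n+1}, lambda):
#   p_{n+1} = v_n,   v_n = (q_{n+1} - q_n)/dt       (Dirac-1 velocity)
#   p_{n+1} = p_n + dt * (dL/dq(q_n) + lambda * q_n)
#   alpha(q_n) . v_n = 0                            (discrete constraint)

g = 9.81

def dirac1_step(q, p, dt):
    f = (0.0, -g)                       # dL/dq for L = |v|^2/2 - g*y
    # lambda is obtained from the tangency condition q_n . v_n = 0
    num = q[0] * (p[0] + dt * f[0]) + q[1] * (p[1] + dt * f[1])
    lam = -num / (dt * (q[0] ** 2 + q[1] ** 2))
    v = (p[0] + dt * (f[0] + lam * q[0]),
         p[1] + dt * (f[1] + lam * q[1]))
    q_new = (q[0] + dt * v[0], q[1] + dt * v[1])
    return q_new, v                     # p_{n+1} = v_n for unit mass

q = (1.0, 0.0)                          # released horizontally, at rest
p = (0.0, 0.0)
dt = 1e-3
for _ in range(2000):                   # up to t = 2
    q, p = dirac1_step(q, p, dt)

constraint_err = abs(q[0] ** 2 + q[1] ** 2 - 1.0)
print(constraint_err)   # small, although the constraint is never enforced exactly
```

The multiplier λ is obtained by a linear solve at each step, exactly as claimed in the text: no nonlinear constraint equation has to be solved "by hand".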

Example: chaos for double pendulum
We now apply the Dirac integrators constructed in the previous subsection to a textbook problem: a planar double pendulum in a gravity field, i.e. a system of 2 mass points attached to rigid inextensible weightless rods (see Fig. 9). Although it looks like a classical well-studied problem, it is a bit challenging for simulations. In the absence of the gravity field, it is a textbook example of an integrable system (energy and angular momentum are conserved, thus the Liouville-Arnold theorem can be applied). With gravity, the "folkloric knowledge" says that it is chaotic, although, as far as we know, there is no rigorous proof of this fact. Regarding integrability, there is a semi-numerical proof of the absence of an additional first integral in [26], using the computation of the monodromy group by a method presented in [25]. In any case, the apparent chaoticity of the system results in its sensitivity to parameters and initial data in numerical simulation.
From the point of view of the previous subsection, this is a typical example of a system with constraints: the distance ℓ_1 from the first mass point to the origin and the distance ℓ_2 between the two mass points are fixed. The system admits a parametrization in terms of angles, but we will pretend not to know it, in order to test the method.
The typical result of simulations is shown in Fig. 10, where the Dirac-2 and explicit Euler methods are compared. For visualization (but not for computation), we use the angle representation of the double pendulum (see Fig. 9). Both algorithms start with the same initial data and the same time step Δt = 0.0001. They are in good agreement at the beginning, as can be seen in the two top graphs of Fig. 10. But already at time T = 50, the difference is visible (middle graphs). And towards T = 100 the difference becomes dramatic: with the Euler method the pendulum makes full turns instead of oscillating. This is clearly a computation artifact, since decreasing the time step gets rid of the discrepancy and recovers the left picture for both methods. Note also that the Dirac structure based method preserves the constraints much better than the Euler one: the errors are 2.2·10^{-6} and 0.06, respectively.
A similar effect is observed for other methods: the trapezium method, and even Runge-Kutta, which is of higher order. Moreover, there is another non-negligible convenience of Dirac structure based methods: the Lagrange multipliers are treated like the other dynamical variables; there is no need to solve the equation related to the constraints (22) "by hand".
In many areas of mechanics, systems are often described by an underlying geometric structure. As observed in the two previous sections, making use of these structures leads to more robust numerical schemes. In the last section, we propose an integrator which is suitable to general systems where no geometric structure is exploitable for numerical simulations.

Borel-Laplace integrator
Consider an ordinary differential equation or a semi-discretized partial differential equation:
$$\frac{du}{dt} = F(u, t), \tag{23}$$
associated to an initial condition u(t = 0) = u_0. We look for a time series solution:
$$\hat u(t) = \sum_{n=0}^{+\infty} u_n t^n. \tag{24}$$
Generally, the right-hand side of (23) can be expanded into
$$F(u(t), t) = \sum_{n=0}^{+\infty} F_n t^n, \tag{25}$$
where F_n is the n-th Taylor coefficient of the function F(u(t), t) at t = 0. Injecting (24) and (25) into Eq. (23) leads to the following recurrence relation:
$$(n+1)\, u_{n+1} = F_n. \tag{26}$$
This relation enables one to compute the terms of the series û. See the following subsections for concrete examples. A Borel summation is then applied to the series (24). This summation is essential if the convergence radius of the series is zero and the series is Borel-summable [2,20]. If the convergence radius is not zero, the Borel summation enlarges the domain of validity of the series. The Borel sum of û(t) is
$$S\hat u = \mathcal{L} \circ \mathcal{P} \circ \mathcal{B}\, \hat u, \tag{27}$$
where $\mathcal{B}$ is the Borel transform, $\mathcal{P}$ is a prolongation along a semi-line in ℂ linking 0 to infinity (we will take the real positive semi-line), and $\mathcal{L}$ is the Laplace transform along this semi-line. The theory of Borel summation can be found, for example, in [2,20-22]. Some other works on BPL as a time integrator can be found in [8,23]. In this section, we present very briefly the Borel-Padé-Laplace algorithm, integrated into a numerical scheme. The Borel-Padé-Laplace summation integrator (BPL) consists of the following steps:
• Given an initial condition u(t_0) = u_0, compute a truncated series solution via the recurrence (26):
$$\hat u_N(t) = \sum_{n=0}^{N} u_n t^n.$$
• Compute its Borel transform:
$$\mathcal{B}\hat u_N(\xi) = \sum_{n=0}^{N-1} \frac{u_{n+1}}{n!}\, \xi^n.$$
• Transform $\mathcal{B}\hat u_N(\xi)$ into a rational function via a Padé approximation:
$$P(\xi) = \frac{a_0 + a_1 \xi + \dots + a_{N_{num}} \xi^{N_{num}}}{b_0 + b_1 \xi + \dots + b_{N_{den}} \xi^{N_{den}}}.$$
The Padé approximation materializes the prolongation in the Borel summation procedure.
• Apply a Laplace transformation (at 1/t) to P(ξ) to obtain a numerical Borel sum:
$$S\hat u_N(t) = u_0 + \int_0^{+\infty} P(\xi)\, e^{-\xi/t}\, d\xi.$$
Numerically, the integral is replaced by a Gauss-Laguerre quadrature.
• Take Sû_N(t) as an approximate solution u(t) of (23) within the interval [t_0, t_1] where the residue of the equation is smaller than a parameter ε_res.
• Restart the algorithm with u_0 = u(t_1) as initial condition to obtain an approximate solution for larger values of t.
At each iteration, t_1 − t_0 is considered as the (adaptive) time step of the scheme. The average time step will be used for comparisons in the numerical experiments. Note that at each time, the approximate solution has an analytical representation as a Laplace integral. A continued fraction representation can also be used [9]. An advantage of BPL is that it is totally explicit, in contrast with symplectic integrators in general. Moreover, changing the order of the scheme is as easy as setting N to a different value. Note also that the resummation procedure can be done componentwise, enabling an easy parallelization on multi-core computers. However, no such optimization has been done in the present article.
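The resummation idea can be illustrated on the classic Euler series (a textbook example, not the article's BPL implementation; the Padé and Gauss-Laguerre ingredients are omitted and the Laplace integral is computed by a simple quadrature):

```python
import math

# The divergent Euler series  S(t) ~ sum_{n>=0} (-1)^n n! t^n  has zero
# radius of convergence, but its Borel transform sum (-1)^n xi^n = 1/(1+xi)
# converges, and the Laplace integral
#     S(t) = (1/t) * integral_0^inf exp(-xi/t) / (1 + xi) dxi
# gives a finite Borel sum for every t > 0.

def borel_sum(t, xi_max=200.0, n=200000):
    # composite trapezoidal rule for the Laplace integral
    h = xi_max / n
    acc = 0.5 * (1.0 + math.exp(-xi_max / t) / (1.0 + xi_max))
    for i in range(1, n):
        xi = i * h
        acc += math.exp(-xi / t) / (1.0 + xi)
    return h * acc / t

def partial_sum(t, N):
    return sum((-1) ** n * math.factorial(n) * t ** n for n in range(N + 1))

t = 0.1
s = borel_sum(t)
# The optimally truncated series (N ~ 1/t) agrees with the Borel sum up to
# an exponentially small error, while higher partial sums blow up.
print(s, partial_sum(t, 10))
```

The factorial growth of the coefficients makes the raw series useless beyond a few terms, yet the Borel sum is perfectly finite; BPL exploits the same mechanism to extend the validity of the truncated series in time.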
In the following subsection, a partial analysis of the symplecticity property of BPL is presented.

High-order symplecticity
The numerical flow of BPL can be defined as
$$\varphi_{\Delta t} : u_0 \mapsto S\hat u_N(\Delta t).$$
Currently, no symplecticity result has been found for this scheme. Instead, it can be shown that a scheme based on the truncated series û without the resummation procedure is symplectic at order N, if the equation is Hamiltonian.
The flow of the scheme based on the time series û is
$$\varphi_{t,\hat u} : u_0 \mapsto \hat u(t) = \sum_{n=0}^{+\infty} t^n u_n. \tag{28}$$

Lemma 1 The flow of the scheme based on the time series, applied to the Hamiltonian equation (2), satisfies
$$\frac{\partial \varphi_{t,\hat u}}{\partial u_0} = \sum_{n=0}^{+\infty} t^n D_n, \qquad D_n := \frac{\partial u_n}{\partial u_0},$$
with the convention that D_0 = I.
This can be straightforwardly deduced by injecting the time series û into (2) and identifying the coefficients of each t^n. Next, if the series is convergent then, inside the convergence disc, û is the exact solution. In this case, $\varphi_{t,\hat u}$ is symplectic. We reformulate this statement in the following theorem.

Theorem 2 If the series is convergent then, inside the convergence disc,
$$\left(\frac{\partial \varphi_{t,\hat u}}{\partial u_0}\right)^{\!T} J \left(\frac{\partial \varphi_{t,\hat u}}{\partial u_0}\right) = J. \tag{29}$$

Corollary 3 If the series is convergent then, for any n ≥ 1,
$$\sum_{k=0}^{n} D_k^T\, J\, D_{n-k} = 0. \tag{30}$$
This corollary is obtained by injecting the series development (28) into (29) and identifying the coefficients of t^n for n ≥ 1. For n = 0, we simply have
$$D_0^T\, J\, D_0 = J. \tag{31}$$
When the series is truncated at order N, the flow of û_N is
$$\varphi_{t,\hat u_N} : u_0 \mapsto \sum_{n=0}^{N} t^n u_n.$$
The following theorem shows that the scheme based on the truncated series is symplectic at order N + 1.

Theorem 4 If the series is convergent then
$$\left(\frac{\partial \varphi_{t,\hat u_N}}{\partial u_0}\right)^{\!T} J \left(\frac{\partial \varphi_{t,\hat u_N}}{\partial u_0}\right) = J + O(t^{N+1})$$
for t ∈ [0, δt], where δt is the convergence radius. Indeed,
$$\left(\frac{\partial \varphi_{t,\hat u_N}}{\partial u_0}\right)^{\!T} J \left(\frac{\partial \varphi_{t,\hat u_N}}{\partial u_0}\right) = \sum_{n=0}^{2N} t^n \sum_{\substack{k+l=n \\ 0 \le k,\, l \le N}} D_k^T\, J\, D_l.$$
Using (30) and (31), the theorem follows. Note that in Theorem 4, δt is generally small. In the following subsections, BPL is implemented and tested on a Hamiltonian equation. Next, we present some experiments on non-Hamiltonian equations.
In the simulations, the truncation order of the series is set to N = 10 unless otherwise stated. The degrees of the numerator and of the denominator of the Padé approximant are N_num = 4 and N_den = 5. A singular value decomposition is used to strengthen the robustness of the Padé computation [11]. Twenty Gauss-Laguerre roots are used for the quadrature.
The aim of these simulations is not to make an extensive comparison of BPL with classical schemes (this will be done in a forthcoming paper) but only to show the potential of the scheme in predicting long time dynamics.

Periodic Toda lattice
We consider again the periodic Toda lattice from the "Periodic Toda lattice" section. The quality parameter ε_res of BPL is chosen such that the mean time step is approximately 0.1, and the results are compared with those of RK4 and RK4sym (see Figs. 3 and 4). Figure 11a presents the local relative errors on the Hamiltonian. As can be seen, BPL is much more accurate than RK4sym for t ∈ [0, 5000]. The global error on the Hamiltonian, defined as the mean of the relative errors over the iterations, is 3.010·10^{-4} with BPL and 2.261·10^{-3} with RK4sym. BPL also preserves the eigenvalues of the matrix L in equation (18) with a fair precision, as seen in Fig. 11b. Table 1 compares the CPU times needed for 5000 seconds of simulation, with almost the same mean time step. It shows that RK4 is the fastest but, as already mentioned, it is not accurate enough for Δt = 0.1. It also shows that BPL, for approximately the same mean time step, is about twice as slow as RK4sym, but is 7.5 times more accurate. In a second test, the different parameters (the time step for RK4 and RK4sym, and ε_res for BPL) are set such that the global accuracies are comparable. Table 2 shows the (mean) time steps and the CPU times for a global error around 2.43·10^{-3}. As can be seen, RK4sym needs 28 percent less time than BPL to achieve the same accuracy on H, due to its especially good behaviour towards the preservation of the Hamiltonian. However, BPL is more than 8 times as fast as the classical RK4 scheme.

Duffing equation
In the next numerical experiment, we consider the forced Duffing equation
$$\ddot u + r \dot u + a u + b u^3 = c \cos(\omega t), \tag{33}$$
which describes nonlinear damped oscillators [16,30]. To illustrate the series decomposition, let us write this equation as a first order system:
$$\dot u = v, \qquad \dot v = -r v - a u - b u^3 + c \cos(\omega t). \tag{34}$$
Thanks to a Taylor expansion of Eq. (34), the terms of the time series û and v̂ at t = t_0 can be computed as follows:
$$(n+1)\, u_{n+1} = v_n, \qquad (n+1)\, v_{n+1} = -r v_n - a u_n - b \sum u_{n_1} u_{n_2} u_{n_3} + c_n,$$
where c_n denotes the n-th Taylor coefficient of $t \mapsto c\cos(\omega t)$ at t_0. In the last term, the sum is over n_1, n_2, n_3 ∈ {0, ..., n} such that n_1 + n_2 + n_3 = n. Note that writing Eq. (33) as a first order system is not mandatory (and slightly increases the computation time and memory requirement). One can directly apply a Taylor expansion to Eq. (33) to compute the terms of û. Note also that for more complicated right-hand sides, automatic differentiation avoids calculating the Taylor expansions by hand [3]. We first consider the force-free case with two sets of coefficients for which a first integral exists:
• Case 1: a = 2/9, b = 1, r = −1,
• Case 2: a = 1, b = 1, r = 0.
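The recurrence above can be sketched in a few lines. The following code (our illustration, with hypothetical variable names) computes the Taylor coefficients for the force-free Case 2 and checks the conservation of the first integral H = u̇²/2 + a u²/2 + b u⁴/4 along the truncated series.

```python
import math

# Series recurrence for the force-free Duffing equation
#   u'' + r u' + a u + b u^3 = 0,
# written as u' = v, v' = -r v - a u - b u^3 (Case 2: a = b = 1, r = 0).
# Taylor coefficients around t0 satisfy
#   (n+1) u_{n+1} = v_n
#   (n+1) v_{n+1} = -r v_n - a u_n - b sum_{i+j+k=n} u_i u_j u_k .

a, b, r = 1.0, 1.0, 0.0
N = 15

def taylor_coeffs(u0, v0):
    u, v = [u0], [v0]
    for n in range(N):
        cubic = sum(u[i] * u[j] * u[n - i - j]          # Cauchy product of u^3
                    for i in range(n + 1) for j in range(n + 1 - i))
        u.append(v[n] / (n + 1))
        v.append((-r * v[n] - a * u[n] - b * cubic) / (n + 1))
    return u, v

u_c, v_c = taylor_coeffs(1.0, 0.0)      # initial condition u = 1, u' = 0
t = 0.2
u_series = sum(c * t ** n for n, c in enumerate(u_c))
v_series = sum(c * t ** n for n, c in enumerate(v_c))

# first integral H = v^2/2 + a u^2/2 + b u^4/4, evaluated on the series
H0 = 0.0 + a * 0.5 + b * 0.25           # value at (u, v) = (1, 0)
H = v_series ** 2 / 2 + a * u_series ** 2 / 2 + b * u_series ** 4 / 4
print(abs(H - H0))  # small truncation error
```

Only the series step of BPL is illustrated here; in the full algorithm these coefficients would then be Borel-transformed, Padé-approximated and Laplace-resummed.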
In Case 1, the first integral is
$$I = e^{-4t/3} \left( \frac{\dot u^2}{2} - \frac{u \dot u}{3} + \frac{u^2}{18} + \frac{u^4}{4} \right).$$
In Case 2, the equation can be written in Hamiltonian form, with a Jacobi elliptic sine function as exact solution. The first integral (the Hamiltonian function) is
$$H = \frac{\dot u^2}{2} + \frac{u^2}{2} + \frac{u^4}{4}. \tag{37}$$
The initial conditions are u(0) = 1, u̇(0) = 0. In Case 1, the quality criterion ε_res of BPL is set to 10^{-4}. The mean time step within the first 20 seconds is approximately 4.7·10^{-4}. Figure 12 plots the evolution of the relative error on the first integral, compared to the relative error of RK4 with a time step 4·10^{-4}. As can be seen, the RK4 error is very small at the beginning of the simulation. It remains below 10^{-9} at t = 10. But when t is large, the RK4 error grows and reaches 50 percent at t = 22.3 s. With BPL, the error oscillates when t is small, with an amplitude of the same order as ε_res. Then the error stabilizes rapidly around 1.27·10^{-6}.
In Case 2, the quality criterion ε_res of BPL is chosen such that the mean time step is around 0.106. For RK4, the time step is set to 0.1. The evolution of the relative error on the first integral (37) is plotted in Fig. 13 in logarithmic scale. As can be seen, the error of BPL is much smaller than that of RK4.
To end with the Duffing equation, some phase portraits obtained with BPL are computed. They correspond to a = −1, b = 1, r = 0.3, ω = 1.2 and c varying from 0.20 to 0.65. The initial conditions are u(0) = 1 and u̇(0) = 0. Figure 14 presents the phase trajectories for t ∈ [40, 1000], that is, after the transient phase. These plots have been obtained with a rather loose value of ε_res, for which the mean time step is around 0.5. But as can be seen, when c = 0.20, 0.28, 0.29, 0.37, and 0.65, the (multiple) periodicity is very well captured, even over a very long time interval. Indeed, the curves are closed. For c = 0.50, the solution is chaotic but bounded. These results are in agreement with those presented in [16].
In the last subsection, BPL is applied to a semi-discretized partial differential equation and compared to some other adaptive schemes. Since the system is large enough, it is worth giving an indication of the CPU simulation time.

Korteweg-de Vries equation
Consider the Korteweg-de Vries equation
$$\frac{\partial u}{\partial t} + 6 u \frac{\partial u}{\partial x} + \frac{\partial^3 u}{\partial x^3} = 0.$$
The time steps are presented in Fig. 16. On average, the BPL time step is 238 times bigger than that of ETDRK4. This is reflected in the CPU time. Indeed, as can be observed in Table 3, BPL is about 950 times faster than ETDRK4 for approximately the same precision. Note that, for this specific problem, the time step of RK4 has the same order as that of ETDRK4, but RK4 is more efficient than ETDRK4 in terms of computation time, since it requires neither numerical matrix (pseudo-)inversion nor matrix exponentials. Compared to BPL, RK4 takes 60 times more CPU time to reach one period.
In the next simulation, we analyse the behaviour of the schemes when the size d of the problem is increased. Figure 17a presents the L² error at t = T. It shows that the precision of BPL and ETDRK4 remains approximately the same, except when d is very small. Figure 17b shows, however, that BPL requires far fewer iterations (100 iterations versus 20389 for ETDRK4 when d = 512, to reach one period). The time steps of BPL and ETDRK4 seem to have the same behaviour when d is large enough. Indeed, they tend to be independent of d, as suggested by Fig. 17b. The computation time, however, increases with d. In all of the previous simulations, the truncation order N of the time series in BPL was set to 10. In our last test, the effect of N on the quality of BPL is analysed. For this, the size of the problem is fixed to d = 128. Figure 18a shows that the time step increases with N, passing from Δt_mean = 0.0256 for N = 4 to Δt_mean = 0.156 for N = 14. Although the number of iterations is consequently reduced, the CPU time also increases with N, going from 0.173 to 0.686 s, as can be observed in Fig. 18b. This is caused by the fact that more coefficients of the series and more Padé coefficients have to be computed. The error, for its part, fluctuates but globally decreases from 7.51·10^{-5} to 4.30·10^{-6}. This fluctuation is not uncommon in series-based approximations. It is interesting to note that whereas the error is divided by 17.5, the CPU time is multiplied only by 3.96 between N = 4 and N = 14. In other words, the precision increases faster than the CPU time when the order of the scheme is increased.

Conclusion
In this article, we gave an overview of some time integrators for long-time simulations. Two geometric integrators and a general-purpose time integrator were presented.
Through numerical examples, the ability of symplectic integrators to preserve the Hamiltonian, the angular momentum and the eigenvalues of the Lax matrix was observed. Moreover, it was shown that symplectic integrators are more robust than classical schemes when the time step is enlarged (in the Toda lattice example) or when a perturbation is introduced (three-body problem).
Next, a way of constructing Dirac integrators for constrained systems was given. Numerical experiments showed that respecting the Dirac structure at the discrete level avoids numerical artifacts. As a consequence, Dirac integrators are able to reproduce the dynamics of the system over a long time.
Finally, we showed that BPL competes with symplectic integrators in predicting Hamiltonian dynamics (the Toda lattice and Case 2 of the Duffing equation). For more general equations, BPL also preserves the first integrals of the system with high precision, as well as the periodicity when the solution is periodic. Lastly, compared to two popular schemes, BPL appears to require less computation time.