On phase change and latent heat models in metal additive manufacturing process simulation

This work proposes an extension of phase change and latent heat models for the simulation of metal powder bed fusion additive manufacturing processes on the macroscale and compares different models with respect to accuracy and numerical efficiency. Specifically, a systematic formulation of phase fraction variables is proposed relying either on temperature- or enthalpy-based interpolation schemes. Moreover, two well-known schemes for the numerical treatment of latent heat, namely the apparent capacity and the so-called heat integration scheme, are critically reviewed and compared with respect to numerical efficiency and overall accuracy. Eventually, a novel variant of the heat integration scheme is proposed that allows efficiency and accuracy to be controlled directly by means of a user-defined tolerance. Depending on the chosen tolerance, this novel approach is shown to offer increased numerical efficiency for a given level of accuracy, or improved accuracy for a given level of numerical efficiency, as compared to the apparent capacity and the original heat integration scheme. The investigation and comparison of all considered schemes is based on a series of numerical test cases that are representative of application scenarios in metal powder bed fusion additive manufacturing.


Introduction
Additive manufacturing (AM) is widely considered to be a key technology for future advances in engineering. AM offers high flexibility in part design while still achieving the mechanical properties required for functional parts [1]. In metal powder bed fusion additive manufacturing (PBFAM), multiple layers of metal powder are successively melted at selected positions, which eventually form the cross-sections of the final part after solidification. Energy is commonly deposited by a laser or electron beam, giving rise to the respective names selective laser melting (SLM) and electron beam melting (EBM). These processes come with very challenging thermophysical phenomena on multiple length and time scales [2]. Accordingly, existing modeling approaches can be classified with respect to the resolved length scales: Macroscale approaches commonly aim at determining spatial distributions of physical fields such as temperature, residual stresses or dimensional warping on the scale of the design part. Mesoscale approaches resolve the length scale of individual powder particles on domains smaller than one powder layer to either study the melt pool thermo-fluid dynamics during the melting process [3][4][5][6][7][8] or the cohesive powder flow during the previous powder recoating process [9][10][11]. Finally, microscale approaches predict the evolution of the metallurgical microstructure during solidification [12][13][14][15][16]. A broad overview of state-of-the-art modeling approaches on these different length scales can be found in [17]. The present article focuses on the development of a thermal macroscale model for metal PBFAM processes.
Macroscale PBFAM models typically treat the powder phase as a homogenized continuum described via spatially averaged thermal and mechanical properties, without resolving individual powder particles. Also, the complex fluid dynamics within the melt pool is typically not explicitly resolved. Instead, a pure thermo-(solid-)mechanical problem is solved, usually based on a Lagrangian description and a spatial finite element discretization, with specific temperature- and phase-dependent thermal and mechanical constitutive parameters for the (homogenized) powder phase, the melt phase and the solidified phase. On the one hand, from a modeling point of view, such a procedure considerably simplifies the coupling of the different phase domains. On the other hand, this approach seems to be well-justified for certain application scenarios.


General thermal model
For the purpose of this study it is sufficient to focus on the following transient, purely thermal problem described by the heat equation for the temperature T and appropriate boundary conditions. The problem statement in strong form is given by the heat equation (1) in Ω. Temperatures T are prescribed on the boundary part Γ T and heat fluxes on Γ q. The heat flux q is specified by Fourier's law for isotropic material. Material properties, namely the (volumetric) heat capacity c and the conductivity k, may in general depend on the temperature T but also on the phase r (see Section 2.3 for the definition of phase fractions and the interpolation of material parameters). In this contribution, spatial discretization will be based on the finite element method (see Section 3), i.e. (1) has to be transferred into its weak form via multiplication with a test function δT and integration by parts, viz. the weak form (3),
where the boundary conditions have already been inserted. By introducing the trial space V = {T | T ∈ H 1, T = T̄ on Γ T } as well as the weighting space W = {δT | δT ∈ H 1, δT = 0 on Γ T }, where H 1 denotes the Sobolev space of functions with square-integrable first derivatives, (3) is equivalent to the strong form (1).

Modeling of the laser heat source
The source term r̄ represents the energy deposited by the laser beam as a volumetric heat source, based on the model for radiative and conductive heat transfer in powder beds by Gusarov et al. [20]. In the following, a summary of the main model constituents is given. Let a powder layer be distributed in the xy-plane, where the powder material extends in positive z-direction from the powder layer surface at z = 0 up to the layer thickness L at z = L. A laser beam of nominal power W and size R is applied normal to this plane. The laser beam as well as the local coordinate system move in x-direction with a velocity v. The source term r̄ is then given in this local coordinate system relative to the laser beam center by (4), where r h is the distance in the xy-plane from the laser beam center and β h is the extinction coefficient. The nominal power density Q 0 is radially distributed around the laser beam center as given by (5). The nominal laser power W has been averaged and reduced to an effective power W e to account for various losses. Thus, (5) describes the spatial distribution in x- and y-direction. The normalized power density q is given in terms of the dimensionless coordinate ξ = β h z as

q = (ρ h a) / ((4ρ h − 3) D) { e^(−λ) (1 − ρ h ²) [ e^(−2aξ) (1 − a) + e^(2aξ) (a + 1) ] − [ e^(2a(ξ−λ)) (1 − a − ρ h (a + 1)) + e^(2a(λ−ξ)) (a + ρ h (a − 1) + 1) ] (ρ h e^(−2λ) + 3) } + 3 (1 − ρ h) (e^(−ξ) − ρ h e^(ξ−2λ)) / (4ρ h − 3)     (6)

with hemispherical reflectivity ρ h, constant a = √(1 − ρ h), optical thickness λ = β h L and the constant D defined in (7). Finally, the derivative of (6) with respect to ξ, as required in (4), reads as given in (8). Apart from the optical properties, (8) only depends on the z-coordinate z = ξ/β h.
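The moving-frame evaluation of the radial distance r h and the normalization of the radial power density to the effective power W e can be sketched as follows. The parabolic profile used here is purely illustrative and is not the actual distribution (5) of Gusarov et al. [20]; all parameter values are placeholders.

```python
import math

def r_h(x, y, t, v):
    """Distance from the laser beam center, which moves in x-direction
    with velocity v (coordinates in the fixed frame)."""
    return math.hypot(x - v * t, y)

def Q0(r, W_e, R):
    """Illustrative parabolic power density of spot size R; chosen such
    that it integrates to the effective power W_e over the spot area."""
    if r >= R:
        return 0.0
    return 2.0 * W_e / (math.pi * R**2) * (1.0 - (r / R) ** 2)

# numerical check of the normalization: midpoint rule over the spot area
W_e, R = 100.0, 5e-5
n = 2000
total = sum(
    Q0((i + 0.5) * R / n, W_e, R) * 2.0 * math.pi * ((i + 0.5) * R / n) * (R / n)
    for i in range(n)
)
```

The check confirms that the assumed profile carries exactly the effective power, mirroring how (5) distributes W e in the xy-plane.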

Modeling of phase change and latent heat
So far we have not addressed the (crucial) phase change problem (i.e. melting and solidification), first from powder to molten and eventually from molten to solid phase. Consider a domain containing both solid and liquid phase separated by an interface Γ m, which is defined by the isotherm T = T m. Based on an energy balance at the interface, the so-called Stefan-Neumann equation (9) has to hold, where n sl is the interface normal vector, h m the (volume-specific) latent heat of melting and the subscripts (·) s and (·) l denote quantities evaluated in the solid or liquid phase, respectively. The absorbed or released heat flux ∆q m is proportional to the velocity v sl of the evolving interface, in general leading to discontinuous heat fluxes across the phase interface. The free boundary condition (9) is especially suitable for sharp interface models in Eulerian description when combined with explicit interface tracking schemes, e.g. via level set functions [37], whose temporal evolution is defined by v sl. In this work, as typical for macroscale PBFAM models, the thermal problem is described in a Lagrangian manner in combination with a diffuse interface model, i.e. it is assumed that phase change takes place across an extended interface volume of finite thickness. Within this volume, the temperature is constrained according to (10a) until the melting enthalpy h m has been absorbed or released, where h 0 represents the enthalpy level at which melting starts. The left part of Fig. 1 (solid line) illustrates this concept. When deriving the weak form (3) in a variational manner, constraint equation (10a) (or alternatively its rate form (10b)) can be enforced via the Lagrange multiplier potential (11), where λ represents the Lagrange multiplier enforcing (10a).
Its total variation yields (12). The first term in (12) represents the original constraint equation (10a), while the second term yields an additional contribution (13) to the weak form (3). This leads to the expression (15) for the enthalpy during phase change and eventually to the integral limiting condition (16) for the Lagrange multiplier. Hu and Argyropoulos [45] review various methods to account for the latent heat associated with phase change. In the following two sections, the basics of two methods especially popular in the field of PBFAM modeling are briefly presented. In particular, we propose that these two schemes can be interpreted as different realizations of the constrained weak form (13).
Remark. So far we have considered pure materials, i.e. isothermal phase change at a fixed melting temperature T m . For alloys, phase change typically happens gradually, i.e. the latent heat is absorbed or released within a rather narrow temperature interval between solidus temperature T s and liquidus temperature T l , in the following denoted as mushy phase change.
Remark. Throughout this work, the phase interface is implicitly defined by isotherms, a common choice in the context of PBFAM process simulation. Depending on the type of phase change (isothermal or mushy), one might take the isotherm at melting temperature (supplemented by a proper tolerance) or the isotherms at solidus and liquidus temperature to represent the interface.

Apparent capacity method
One of the simplest approaches to capture the effects of latent heat is the so-called apparent capacity (AC) method. The basic idea is to regularize the constraint (10) such that phase change takes place within a finite temperature interval of width 2d given by T ∈ [T m − d; T m + d]. Throughout this work, the temperature bounds confining this regularized phase change interval for isothermal phase change will be represented by the same variables T s = T m − d and T l = T m + d as the solidus and liquidus temperature for mushy phase change. This choice seems reasonable as it will turn out that the algorithmic treatment of both cases is identical. Considering now the weak form (13), the apparent capacity method results from the choice (17), where the factor c m (T) ≥ 0 penalizes non-zero temperature rates Ṫ ≠ 0, i.e. violations of constraint (10b). Thus, according to (17) the Lagrange multiplier is not considered as an independent primary variable and (10b) is no longer satisfied exactly. Inserting (17) into (13) allows the identification of a modified capacity (18), justifying the name apparent capacity method. The choice of c m (T) is only restricted by the limiting condition (16), which after a change of variables eventually reads as (19). According to (19), small values of c m (T) require a large regularized phase change interval, typically resulting in more benign numerical behavior at the cost of lower solution accuracy, and vice versa. The original phase change constraint (10) (solid line) as well as the regularized phase change constraint based on the apparent capacity (18) (dashed line) are illustrated in Fig. 1. In this contribution a smoothed triangular distribution is chosen for c m, as illustrated in the right part of Fig. 1.
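The normalization expressed by (19), namely that the latent-heat contribution to the capacity must integrate to h m over the regularized interval, can be illustrated with a plain (non-smoothed) triangular distribution. This is only a sketch: the paper uses a smoothed variant, and all parameter values below are illustrative.

```python
import numpy as np

def c_m(T, T_s, T_l, h_m):
    """Triangular latent-heat capacity peaking at the interval midpoint.

    The peak height is chosen such that the integral of c_m over
    [T_s, T_l] equals the (volumetric) latent heat h_m, consistent
    with the limiting condition on the absorbed enthalpy.
    """
    T_mid = 0.5 * (T_s + T_l)
    d = 0.5 * (T_l - T_s)
    peak = h_m / d  # triangle area: 0.5 * (2d) * peak = h_m
    T = np.asarray(T, dtype=float)
    dist = np.abs(T - T_mid)
    return np.where(dist < d, peak * (1.0 - dist / d), 0.0)

def apparent_capacity(T, c_base, T_s, T_l, h_m):
    """Apparent capacity: regular capacity plus latent-heat contribution."""
    return c_base + c_m(T, T_s, T_l, h_m)

# verify the normalization by midpoint-rule quadrature over the interval
dT = 1e-3
T_grid = np.arange(1600.0, 1700.0, dT) + 0.5 * dT
h = np.sum(c_m(T_grid, 1630.0, 1670.0, 2.18e9)) * dT
```

The quadrature check recovers h m, i.e. the regularized scheme absorbs and releases the full latent heat over one pass through the interval.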
As simple as the AC method may be, it suffers from one major drawback: an accurate representation of the phase change constraint, i.e. the choice of a small phase change interval 2d, yields large values and steep gradients of the function c m (T), which are not only challenging for numerical solution schemes (e.g. nonlinear solvers, see Section 3) but also prone to large time integration errors. In this case, the absorbed or released enthalpy during phase change (19), and thus the overall energy balance of the phase change problem, is only captured with low accuracy.

Heat integration method
Another popular and more advanced procedure is what Hu and Argyropoulos [45] call the heat integration (HI) method. It has first been applied to an FE setting by Rolph and Bathe [43] and is still used in more recent contributions [21,44]. Here, we present the basic concept by proposing an alternative interpretation of the heat integration method as an augmented Lagrange constraint enforcement scheme [46]. The full algorithmic details of the method in the spatially and temporally discretized problem setting will be presented in Section 4.
In a first step, the Lagrange multiplier in the weak form (3) is replaced by an augmented Lagrange formulation for the constraint (10a) of the form (20), where ϵ > 0 represents a penalty parameter. Comparable to an augmented Lagrange version based on the so-called Uzawa algorithm [46], the new Lagrange multiplier λ̃ is not considered as an independent primary variable but rather as a history variable within an iterative solution procedure of the form λ̃ i = λ i−1, leading to the update rule (21), where i represents an iteration counter to be defined in Section 3. In the final algorithm, the enthalpy rate λ i at iteration i has to fulfill the constraint equation (10a) and the enthalpy inequality (16) up to a user-defined tolerance, which will eventually define a stopping criterion for the iterative procedure (see Section 4.2).
Remark. The HI scheme, which has originally been proposed in the time-discrete problem setting [43], can be recovered from (21) by choosing the penalty parameter according to ϵ = c(T)/∆t, where ∆t represents the time step size to be introduced in Section 3. Now, to motivate this specific choice and to illustrate the working principle of the HI scheme, consider the (theoretical) case of a material with constant capacity c(T) = const. and vanishing conductivity k(T) = 0. Then, the time-discrete version of the heat equation decouples for each material point, and the specific choice ϵ = c/∆t allows to find the correct temperature solution that is consistent with (10a) already in the first iteration i = 1. Even though general scenarios with c(T) ≠ const. and k(T) ≠ 0 will typically require a higher number of iterations, the choice ϵ = c/∆t still seems reasonable and is well-established also in this case.
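This one-iteration property can be illustrated for a single material point without conduction. The sketch below mimics the Uzawa-type update (21) with ϵ = c/∆t; all parameter values are invented for illustration.

```python
# Single-point illustration of the penalty choice eps = c/dt:
# constant capacity, zero conductivity. Numbers are made up.
c = 4.0e6     # volumetric heat capacity [J/(m^3 K)]
dt = 1.0e-3   # time step size [s]
T_n = 1700.0  # converged temperature of the previous step [K]
T_m = 1710.0  # melting temperature [K]
r = 8.0e10    # volumetric heat source [W/m^3]
eps = c / dt  # penalty parameter

# iteration 0: unconstrained backward-Euler update c*(T - T_n)/dt = r - lam
lam = 0.0
T = T_n + (r - lam) * dt / c   # overshoots the melting temperature

# iteration 1: multiplier update lam <- lam + eps*(T - T_m), then re-solve;
# with eps = c/dt the re-solved temperature equals T_m exactly
lam = lam + eps * (T - T_m)
T = T_n + (r - lam) * dt / c
```

After a single multiplier update the temperature is pulled back exactly onto the constraint T = T m, while lam·∆t equals the enthalpy absorbed by the phase change in this step.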
Remark. The original definition of the Uzawa algorithm defines an additional fixed-point iteration wrapped around the iteration loop of the nonlinear solver (e.g. a Newton-Raphson scheme as discussed in Section 3) to perform the iterative update (21). In contrast, the HI algorithm presented in Section 4 will perform these updates directly during the iterations of the nonlinear solver without any additional "outer" iteration loop.

Modeling of temperature-and phase-dependent parameters
As common in macroscale PBFAM models, all three phases are modeled by varying the material parameters of a solid material law. On the one hand, this procedure considerably simplifies the numerical schemes for coupling the different phases. On the other hand, this approach seems also to be justified for the thermo-mechanical problem, since the mechanical forces transferred from the powder and molten phases (with vanishing stiffness) onto the solid phase are negligible to good approximation in many application scenarios. It is also important to note that the change from powder phase to molten phase is irreversible. In the following, we will consider the different types of phase changes in more detail. Fig. 2 gives an illustration of the possible states of a material point. The current temperature T is on the ordinate, while the abscissa shows the highest temperature ever reached, T max. The different areas correspond to different mixtures of powder, melt and solid. By definition there cannot be any possible state for T > T max and this area is blanked out. If T max < T s, all material must still be in powder form (p). If T max > T l, there is no more powder; all material is consolidated and thus must be solid (s) or molten (m). The exact constitution of the two then depends on the current temperature. The same reasoning applies to the current temperature. If T > T l, all material must be molten. If T < T s, material is a mixture of powder and solid, where the exact constitution depends on T max. Perhaps the most interesting scenario is obtained when T s < T < T max < T l. In this case some powder is still left and the consolidated phase consists of melt and solid. The arrows in Fig. 2 indicate the possible evolution of phases. Due to the definition of T max the only way to increase its value is an irreversible movement along the black diagonal line in Fig. 2. For all other locations in the diagram a reversible horizontal movement is possible.
With these considerations a temperature-based interpolation procedure for any material parameter can be derived.

Temperature-based interpolation
First, the focus is on the transition from (non-powder) solid to melt. The liquid fraction g is introduced as in (22). If only solid and molten phase were present, any material parameter f could be interpolated from the solid and melt values f s and f m. The history-dependent material behavior is captured by the fraction of consolidated material r c, defined in (23). The time argument is explicitly stated to emphasize the history-dependence of r c (t). The consolidated fraction is initialized with a proper start value r c (0) (zero/one for locations initially covered with powder/consolidated material). For example, in a region that has initially been covered with powder, definition (23) equals the all-time maximum of the liquid fraction g at this location which, according to (22), carries the same information as the maximum temperature T max. In a region that has initially been covered with solid material, definition (23) equals one for all times, since solid material can never transform back to powder. The monotonically increasing fraction of consolidated material r c together with the liquid fraction g allow the definition of volume fractions for the powder, melt and solid phases according to (24), (25) and (26). Their physical motivation is as follows: the powder fraction r p given in (24) is by definition the complement of the consolidated fraction r c. The molten fraction r m in (25) is independent of the history and is always determined by (22). The solid fraction r s defined in (26) is the part of the consolidated fraction which is not molten. Note that definitions (24), (25) and (26) satisfy the partition of unity and are thus suitable for interpolation.
Any material parameter f can now be interpolated from the single phase values f p, f m, f s weighted by the corresponding fractions, cf. (27). For the special choice f p = f s, a two-phase interpolation without history-dependent behavior is recovered. The dependence on temperature is explicitly stated in (27) since it requires a consistent linearization to achieve robust convergence of the nonlinear solver. Fig. 3 shows the evolution of an exemplary material parameter f over the temperature history for the chosen liquid fraction definition (22). In the left diagram powder melts completely and consequently the parameter f takes on the value of a solid after cooling down below T s. The right diagram shows partial melting: after cooling down below T s the parameter is a weighted average of the powder and solid value based on the consolidated fraction, which is now smaller than 1. Fig. 4 shows a three-dimensional representation which makes the history-dependence on T max more explicit and shows that the parameter interpolation is also continuous over the history. This is important in the modeling of PBFAM processes since it ensures a continuous transition of material parameters between regions of molten or solidified material and regions that are still covered with unmolten powder. Note that the definition of the liquid fraction (22) determines the exact shape of the interpolating curve (27). The kinks at T s and T l could also be smoothed out if this seems necessary for an improved numerical behavior (e.g. robust convergence of the nonlinear solver, see Section 3). Definitions (24), (25) and (26) imply that in a material mixture containing solid and powder the solid fraction always melts first, i.e. when increasing the temperature in a scenario T s < T < T max < T l the powder fraction does not increase before the previous maximum temperature T max is exceeded. With more history variables, also a different behavior could be realized (e.g. powder melting first), but this is not deemed necessary in the context of macroscale PBFAM.
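The interpolation logic described above can be sketched as follows. A liquid fraction that is linear between T s and T l is assumed here for concreteness, and all parameter values are placeholders.

```python
def liquid_fraction(T, T_s, T_l):
    """Temperature-based liquid fraction g(T): 0 below the solidus, 1 above
    the liquidus, linear in between (one possible concrete choice)."""
    if T <= T_s:
        return 0.0
    if T >= T_l:
        return 1.0
    return (T - T_s) / (T_l - T_s)

class MaterialPoint:
    """Tracks the history-dependent consolidated fraction r_c."""

    def __init__(self, T_s, T_l, r_c0=0.0):
        self.T_s, self.T_l = T_s, T_l
        self.r_c = r_c0  # 0: initially powder, 1: initially consolidated

    def fractions(self, T):
        g = liquid_fraction(T, self.T_s, self.T_l)
        # consolidated fraction = all-time maximum of g (irreversible)
        self.r_c = max(self.r_c, g)
        r_p = 1.0 - self.r_c   # powder: complement of consolidated
        r_m = g                # melt: history-independent
        r_s = self.r_c - g     # solid: consolidated but not molten
        return r_p, r_m, r_s

    def interpolate(self, T, f_p, f_m, f_s):
        r_p, r_m, r_s = self.fractions(T)
        return r_p * f_p + r_m * f_m + r_s * f_s

# partial melting: heat to the middle of the interval, then cool down
mp = MaterialPoint(T_s=1600.0, T_l=1700.0)
k_hot = mp.interpolate(1650.0, f_p=1.0, f_m=30.0, f_s=20.0)  # half molten
k_cold = mp.interpolate(300.0, f_p=1.0, f_m=30.0, f_s=20.0)  # after cooling
```

After cooling, the interpolated parameter is the average of the powder and solid values weighted by the consolidated fraction of 0.5, reproducing the partial-melting behavior of the right diagram of Fig. 3.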

Enthalpy-based interpolation
In the case of isothermal phase change (if not regularized by an AC scheme), the definition of the liquid fraction based on temperature as in (22) is not directly applicable. A temperature-based parameter interpolation would still be possible based on a definition of the solidus and liquidus temperatures T s = T m − d and T l = T m + d as numerical regularization values, as done in the AC scheme. However, a more consistent alternative approach utilizes the accumulated melt enthalpy h(t) in (15) to define the liquid fraction (28). Essentially, the liquid fraction is defined as the ratio of the already absorbed melt enthalpy to the latent heat required for complete phase change. All other parts of the interpolation (27) remain unchanged. This liquid fraction will prove useful in combination with the HI method, as further discussed in Section 4.

General numerical solution procedure
The weak form of the heat equation (3) is discretized in space by the finite element method (FEM) following a Bubnov-Galerkin approach, cf. (29), where x is the spatial position, and T j (t) as well as δT j are the nodal temperatures and temperature variations. For time discretization a one-step theta time integration scheme is employed, which, for the model equation φ̇ = f(φ, t), is defined by (30). Here, ∆t := t n+1 − t n is the time step size and the subscript (·) n indicates a quantity evaluated at the discrete time step t n. Together, the spatial and temporal discretization result in a fully discrete, nonlinear system of equations R(T n+1) = 0, where R is the global residual vector and T n+1 the global vector of nodal temperatures at t n+1. The equations are nonlinear due to the temperature-dependence of the heat capacity and thermal conductivity and due to the underlying phase change subproblem represented by the (non-smooth) constraint equation (10a). The system of nonlinear equations is linearized and solved iteratively with a Newton-Raphson scheme, which yields the iterative update procedure (31). The two convergence criteria ||R(T i+1 n+1)|| 2 < ε R and ||∆T i+1 n+1 || 2 < ε T are considered, i.e., for convergence both the norm of the residual and the norm of the iterative solution vector increment have to fall below given tolerances. All implementations and simulations presented in the next sections have been performed in the in-house research code BACI [47], a parallel, multi-physics finite element framework.
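The combination of the one-step theta scheme with a Newton-Raphson solve can be sketched for the scalar model equation φ̇ = f(φ, t). This is a generic illustration of the time-stepping structure, not the BACI implementation.

```python
import math

def theta_step(f, dfdphi, phi_n, t_n, dt, theta=0.5, tol=1e-12, max_iter=50):
    """Advance phi_dot = f(phi, t) by one step of the one-step theta scheme:
    (phi_{n+1} - phi_n)/dt = theta*f(phi_{n+1}, t_{n+1})
                             + (1 - theta)*f(phi_n, t_n),
    solving the nonlinear residual with a Newton-Raphson iteration.
    """
    t_np1 = t_n + dt
    phi = phi_n  # initial guess: previous step value
    for _ in range(max_iter):
        R = phi - phi_n - dt * (theta * f(phi, t_np1)
                                + (1.0 - theta) * f(phi_n, t_n))
        if abs(R) < tol:
            break
        dR = 1.0 - dt * theta * dfdphi(phi, t_np1)  # residual derivative
        phi -= R / dR                               # Newton update
    return phi

# decay problem phi_dot = -phi with exact solution exp(-t)
phi, t, dt = 1.0, 0.0, 0.01
for _ in range(100):
    phi = theta_step(lambda p, t: -p, lambda p, t: -1.0, phi, t, dt)
    t += dt
```

With theta = 0.5 (Crank-Nicolson) the result at t = 1 matches exp(-1) to second-order accuracy; theta = 1 recovers the backward Euler scheme used for the nodal enthalpy update in Section 4.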

Discretization and algorithm of heat integration scheme
Requirements and objective: In Section 5, it will turn out that the HI method typically describes the phase change problem more accurately (e.g. in terms of constraint (10a)) than the AC scheme. Unfortunately, the HI method in its original form is known to typically result in slow Newton-Raphson convergence [21,45], which can be traced back to residual manipulations in subsequent iteration steps i on the basis of (21) that are not accompanied by an associated consistent linearization contribution. In order to satisfy the constraint equation (10a), the original HI scheme typically leads to such manipulations, i.e. to changes of the Lagrange multiplier λ i in subsequent iterations, throughout the entire Newton-Raphson loop, which slows down convergence considerably. While the basics of this original HI scheme are presented in Section 4.1, we propose a novel realization of the HI scheme in Section 4.2. This variant allows control of the accuracy of constraint enforcement in (10a) via a user-defined tolerance. For practically relevant choices of this tolerance, λ i in (21) typically takes on a constant value after only a few iterations, thus allowing for quadratic convergence of the Newton-Raphson scheme afterwards.
The basic idea of the HI method has been introduced in an abstract manner in Section 2.2.2 for the case of isothermal phase change. Essentially, the method performs the update (21) for the Lagrange multiplier λ i occurring in the discrete version of the weak form (13) for each Newton-Raphson iteration i at which constraint equation (10a) is violated. It is important to note that the HI method in the spatially discretized setting enforces this constraint at element nodes (and not at integration points). For this purpose, the nodal volume V k is defined for each node k via (32). Taking advantage of the partition of unity Σ k N k = 1, (32) implies that the sum of all the nodal volumes V k associated with an element indeed equals the volume occupied by this element. Considering the Lagrange multiplier term in (13), the integral over Ω can then be approximated by a nodal evaluation of the integrand λ times the nodal volume. Thus, the contribution to the residual entry R k yields (33). Since λ k is a volume-specific enthalpy rate, Ḣ k can be interpreted as an absolute enthalpy rate. Similarly, the latent heat of melting associated with node k is given by (34). Employing (21) to express the Lagrange multiplier in (33) at t n+1 yields (35), where the penalty parameter was chosen as ϵ = c/∆t (see Section 2.2.2), and hence (36) holds. The iterative update rule (35) together with (36) is the discrete counterpart to (21). Employing a backward Euler scheme for time integration of Ḣ i k,n+1 in (35) and assuming (without loss of generality) h 0 = 0 in (15), the nodal enthalpy H i k,n+1 is given by (37), where H k,n represents the nodal enthalpy at the converged configuration of the last time step t n. Thus, the iterative update rule (37) for the nodal enthalpy H i k,n+1 has the same form as the update rule (35) for the nodal enthalpy rate Ḣ i k,n+1, but with the different initial values H 0 k,n+1 = H k,n and Ḣ 0 k,n+1 = 0 required in the first iteration i = 1 of a time step.
Also the enthalpy limiting condition (16) during phase change can be transferred to the discrete problem setting, where it reads (for h 0 = 0) as (38). Essentially, the working principle of the heat integration scheme is to add the residual contributions in (33) defined by the iterative update scheme (35) as long as the discrete limiting condition (38) is satisfied. The detailed algorithm of the original HI scheme as well as the required adaptions for the proposed tolerance-based HI scheme will be presented in the next two sections.
Remark. As noted by the authors of the original work [43] and in accordance with (33), the HI method requires the consistent use of the nodal lumped capacity. Thus, when using this algorithm, the capacity matrix must enter the residual and Jacobian in the Newton-Raphson algorithm (31) in lumped form to obtain a robust scheme. Note also that no linearization contribution associated with the residual term in (33) is considered in the context of the Newton-Raphson method.

Original heat integration scheme
We will first introduce our understanding of the original HI method [43]. The specific notation was inspired by the formulation in [44]. So far, the considerations have been restricted to isothermal phase change, but they can be extended to the treatment of mushy phase change. The only difference is the fact that during melting the temperature is not constrained to the constant value T m but rather to a gradually evolving intermediate temperature T̃ between [T s, T l]. Thus, in the following, we will present the more general algorithm for mushy phase change and point out the relevant differences to the isothermal scenario. Each node k stores the nodal enthalpy H i−1 k,n+1 as a history variable. At the beginning of the simulation, it is initialized to zero for solid material. At the first Newton-Raphson iteration of a time step it is initialized with the converged value from the last time step, i.e. H 0 k,n+1 = H k,n. Now, after each Newton-Raphson iteration i in time step n + 1 the following calculations are performed:
1. For each node k, check the temperature-based criterion (39), which indicates that the node is not undergoing phase change. In this case the increment ∆H i k,n+1 in (35) is set equal to zero. Here, T k,n denotes the converged temperature value of node k at the last time step n.
2. Else, for each node k which is undergoing phase change, compute the increment according to (40). Here, T̃ is an intermediate temperature given by (41), calculated from the amount of latent heat already absorbed (released) during melting (freezing). The modified capacity c̃ is computed from (42), where c s and c l are the values of the heat capacity at T s and T l, respectively. In case of isothermal phase change, the intermediate temperature simplifies to the melting temperature, i.e. T̃ = T m = T s = T l, and the modified capacity simplifies to the averaged capacity at the melting point, i.e. c̃ = (c s + c l)/2, such that (40) becomes identical to (36).

3. Limit each increment ∆H i k,n+1 such that condition (43) is fulfilled. If condition (43) is violated, the increment ∆H i k,n+1 is limited such that the respective bound in (43) is exactly met (e.g. ∆H i k,n+1 = H m,k − H i−1 k,n+1 if the right bound in (43) was exceeded). Afterwards update the nodal enthalpy according to (44).
4. For each node k where |∆H i k,n+1| > 0, reset the temperature according to (45).
5. Calculate the updated enthalpy rate according to (35) and add the residual contribution (33). The start value Ḣ 0 k,n+1 = 0 in the first iteration i = 1 of each time step is the equivalent of λ 0 in (21).
It is emphasized that the limiting condition (43) allows the use of the same algorithm for both melting and solidification, as discussed in the remark at the end of this section.
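One nodal update of steps 1-4 can be sketched as follows. The function name, the linear intermediate-temperature relation and all parameter values are illustrative assumptions rather than the exact formulas (40)-(45).

```python
def hi_update(T_i, H_prev, H_m, T_s, T_l, c_s, c_l, V_k=1.0):
    """One heat-integration update for a single node (mushy phase change).
    Returns (dH, H_new, T_reset) for Newton iteration i."""
    # intermediate temperature from latent heat already absorbed
    # (assumed linear in the accumulated nodal enthalpy)
    T_tilde = T_s + (H_prev / H_m) * (T_l - T_s)
    c_tilde = 0.5 * (c_s + c_l)  # modified (averaged) capacity
    # tentative increment: enthalpy needed to pull T_i back to T_tilde
    dH = c_tilde * V_k * (T_i - T_tilde)
    # signed limiting: keep the accumulated latent heat in [0, H_m];
    # this automatically skips nodes not undergoing phase change
    dH = min(max(dH, -H_prev), H_m - H_prev)
    H_new = H_prev + dH
    # reset the temperature for nodes that exchanged latent heat
    T_reset = T_s + (H_new / H_m) * (T_l - T_s) if dH != 0.0 else T_i
    return dH, H_new, T_reset
```

Evaluating this update for a solid node below T s yields a zero increment (the tentative negative increment is clipped against the zero enthalpy history), while a molten node above T l is clipped against H m, reproducing the behavior discussed in the remarks below.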

Tolerance-based heat integration scheme
As stated at the beginning of Section 4, the original HI method is known to result in slow Newton-Raphson convergence due to the residual manipulations (46), which are not accompanied by an associated consistent linearization contribution and which typically occur throughout the entire Newton-Raphson loop if no additional, tolerance-controlled abort criterion is utilized. To improve the algorithm, we propose to introduce a tolerance ε tol to control the accuracy of the phase change representation. Hence, we stop adding increments ∆H i k,n+1 in the algorithm above when these are small compared to the latent heat of melting, cf. (47), where ε tol < 1 is a relative tolerance describing the relative amount of latent heat which is not absorbed (released) after melting (solidification) is complete. Inserting (34) and (40) into (47) yields the alternative criterion (48) for step 1 in the algorithm above. The new criterion provides a way to stop the HI algorithm when constraint (10a) is satisfied with a certain accuracy. At first glance, the new criterion (48) seems to significantly change the outcome of the algorithm, since (40) will be evaluated for nodes that are far away from a phase change. However, it can easily be verified that the signed limiting condition (43) will automatically skip all nodes that are not meant to undergo phase change on the basis of their current temperature value (see also the discussion in the remark below).
Remark. Consider a node that is heated up such that the material undergoes a melting process. Initially, the node is in the solid state, i.e. H i−1 k,n+1 = 0, with a temperature below the solidus temperature, i.e. T i k,n+1 < T s. The calculated increment (40) would be negative. Since there is no phase change yet, no increment should be added. Indeed, limiting according to (43) will not allow a negative increment with a zero history, and the increment is set to zero. When the temperature rises above the solidus temperature T s, however, the increments become positive and will be considered according to (43). Positive increments are added to the nodal latent heat H k until it reaches the allowed maximum H m,k. Then the material is fully molten.
Next, consider the inverse process, i.e. cooling of a node with material initially in the molten state. If the material is molten, then H^{i−1}_{k,n+1} = H_{m,k}. The limiting condition (43) shows that only negative increments are allowed when the material is molten. As expected, negative increments (40) are obtained, and the solidification process is initiated, as soon as the temperature drops below the liquidus temperature T_l. The negative increments are added to H_k until it reaches a value of zero. Then the material has returned to the solid state. The phase change as just described is fully reversible.
Remark. Now that the details of the HI scheme have been introduced, the enthalpy-based liquid fraction (28) used for parameter interpolation in Section 2.3 can be further specified. For node k it reads H_k / H_{m,k}, i.e. the ratio of the accumulated nodal latent heat to the total latent heat of melting. In the numerical examples of the following section, this liquid fraction based on latent heat will be employed in combination with the isothermal HI scheme. All latent heat schemes employing a finite phase change interval [T_s, T_l] will use the temperature-based parameter interpolation with liquid fraction (22).
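The enthalpy-based liquid fraction and its use for parameter interpolation can be sketched in a few lines. This is an illustrative sketch assuming a simple linear blend between solid and liquid parameter values; the function names are hypothetical and the interpolation rule in the paper (eq. (22)/(28)) may differ in detail.

```python
def liquid_fraction(H_k, H_m_k):
    """Enthalpy-based liquid fraction of node k: the fraction of the
    total latent heat H_m_k that has already been absorbed.
    Returns 0 in the solid state (H_k = 0) and 1 when fully molten."""
    return H_k / H_m_k

def interpolate(p_solid, p_liquid, g):
    """Linear interpolation of a material parameter (e.g. thermal
    conductivity) between its solid and liquid value, weighted by the
    liquid fraction g."""
    return (1.0 - g) * p_solid + g * p_liquid
```

A temperature-based liquid fraction for mushy schemes would follow the same pattern, with g computed from the position of T inside [T_s, T_l] instead of from the latent heat history.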

Solidification front in a 1D slab
To validate the implementation, a series of numerical experiments is first conducted on a one-dimensional domain for which an analytic solution is available [45]. This example has already been used to demonstrate the validity of methods for capturing latent heat [40]. A pseudo one-dimensional slab (material properties of ice/water) of length L = 1 m is subject to a fixed temperature T̂ = 253 K on its left edge at x = 0. The initial temperature in the whole slab is T_0 = 283 K. The left part of Fig. 5 illustrates this scenario. The interface separating frozen and liquid water slowly travels from left to right. Material parameters for water are given in Table 1; they are taken directly from [40]. The problem is discretized with 25, 50 or 100 linear finite elements in space and three different fixed step sizes ∆t ∈ {200 s, 400 s, 800 s} in time.
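The analytic reference [45] is of Stefan/Neumann similarity type: the phase front advances proportionally to the square root of time. The full two-phase solution with distinct ice and water properties is more involved; as a hedged illustration under simplifying assumptions, the following sketch solves the classical one-phase Stefan problem, where the front position is s(t) = 2λ√(αt) and λ follows from a transcendental equation (the function names and the bisection bracket are our own choices, not from the paper).

```python
import math

def stefan_lambda(stefan_number, lo=1e-6, hi=5.0, tol=1e-12):
    """Solve the one-phase Stefan transcendental equation
        sqrt(pi) * lam * exp(lam**2) * erf(lam) = St
    for lam by bisection; the left-hand side is monotonically
    increasing in lam, so the bracket [lo, hi] is safe for moderate St."""
    def f(lam):
        return math.sqrt(math.pi) * lam * math.exp(lam ** 2) * math.erf(lam) - stefan_number
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if f(mid) > 0.0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

def front_position(t, lam, alpha):
    """Phase front position s(t) = 2*lam*sqrt(alpha*t) of the
    similarity solution (alpha: thermal diffusivity)."""
    return 2.0 * lam * math.sqrt(alpha * t)
```

Evaluating the front position at the nodal coordinates of the 1D mesh gives the reference against which the maximum errors in Fig. 7 could be computed.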
The total simulation time is t_f = 72·10³ s. At this point in time the temperature on the right edge is still at the initial level, so the analytic solution (which is calculated on a semi-infinite domain) remains valid. The introduced HI method will be used in four variants by distinguishing a) isothermal and mushy phase change as well as b) the original criterion (39) and the novel tolerance-based criterion (48). In this example, the phase change of water is isothermal and thus the isothermal HI methods with either the original or the tolerance-based criterion can be applied directly. For the AC method an artificial phase change interval is chosen with T_s = 270 K and T_l = 276 K, i.e., d = 3 K. The same interval is used to apply the original and tolerance-based mushy HI methods. Additionally, for tolerance-based HI the tolerance is chosen as ε_tol = 0.001, i.e., up to 0.1% of the latent heat will be neglected.
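The AC method smears the latent heat over the artificial interval [T_s, T_l] as an additional capacity contribution. A minimal sketch of this idea, assuming the simplest variant with a constant extra term inside the interval (the paper's AC formulation may additionally smooth or temperature-weight this contribution), reads:

```python
def apparent_capacity(T, c_base, latent_heat, T_s, T_l):
    """Apparent-capacity (AC) model sketch: the latent heat (per unit
    mass) is distributed uniformly over the artificial phase change
    interval [T_s, T_l] as an extra heat capacity term; outside the
    interval the base capacity is returned unchanged."""
    if T_s <= T <= T_l:
        return c_base + latent_heat / (T_l - T_s)
    return c_base
```

With the interval of this example (T_s = 270 K, T_l = 276 K) and the latent heat of water, the capacity inside the interval exceeds the base value by more than an order of magnitude, which is precisely the strong material nonlinearity that can make the Newton-Raphson scheme stall for large time steps.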
All approaches yield results that agree very well with the analytic solution. As an example, Fig. 6 shows the solutions obtained with the AC and the original isothermal HI method on the finest mesh. The maximum errors in the numerical solutions provided by the different methods are shown for the three investigated meshes and time step sizes in Fig. 7. The maximum errors lie below 4% for the coarsest mesh and around 2% for the finer meshes, which is deemed accurate enough for the intended use case. Within the considered scope, the time step size has only a minor effect on the accuracy. All versions of the HI scheme produce errors that are slightly higher than the error of the AC scheme. However, for the large time step sizes ∆t = 400 s and ∆t = 800 s the AC method does not always converge.
The real difference between the methods comes to light when numerical efficiency is investigated in terms of Newton iterations needed per time step. We choose to analyze Newton iterations as a measure for the efficiency of the proposed methods instead of computational time, which typically shows a stronger dependency on the specific code implementation. Still, the resulting computation time, which is usually of practical interest, scales directly with the number of Newton iterations. Fig. 8 shows a strong dependence of the original HI method [43] on the spatial discretization, with heavily increased iterations per time step for finer spatial resolutions; missing data points for the AC scheme indicate that no convergence was achieved for these parameter choices. On top of that, a larger time step leads to a further increase in iterations. Both the isothermal and the mushy version of the original method suffer from this effect. Hodge et al. [21] mention that small time steps had to be used because of the original HI method. Our proposed tolerance-based method, however, not only requires significantly fewer iterations per step in every scenario but is also less sensitive to the spatial and temporal discretization. This is beneficial when moving to the simulation of PBFAM processes on the part scale.

Melting volume in a 1D slab
We investigate the same variants on a slightly modified second example (Fig. 5, right). The Dirichlet condition on the left boundary is dropped and all faces are assumed to be insulating. Instead, a spatially varying source term r̂ = 20,000·(1 − x) W/m³ is applied to the whole slab of water, which is initially frozen at T_0 = 263 K. Material parameters for solid and liquid water are again given in Tab. 1. In contrast to the previous example, melting will not take place at a single node representing the phase interface; instead, a whole volume can be in phase transition (i.e. melting). The same spatial and temporal discretizations as before are used, and the total simulation time is t_f = 20·10³ s. Again, the AC method and the four variants of HI are applied with the settings from above. No analytic solution is available for this scenario. Fig. 9 shows the obtained temperature profiles along the one-dimensional slab for the fine discretization with 100 elements and a step size of 200 s. Obviously, the AC and mushy HI methods will not keep temperatures fixed at the melting temperature of 273 K during phase change. If one is concerned about an exact representation of isothermal phase change, only the original and the tolerance-based version of isothermal HI are accurate enough, although some oscillation around the melting temperature is observed. Looking at the final temperatures on the left edge, where melting has already finished, reveals that the latent heat of melting is still captured with good accuracy by all methods, and the predicted temperatures agree well. Given the large temperature range prevalent in the targeted application (PBFAM simulation on the part scale), a highly accurate representation of temperature profiles around the melting point is not of highest practical importance.
Next, the accuracy of the isothermal HI schemes is examined more closely. Solutions with the HI scheme can become quite inaccurate around the melting temperature, especially when larger time steps are used. Fig. 10 shows such solutions of the original scheme at t = 0.5 t_f for different step sizes and compares them to our proposed tolerance-based method. Neither method can exactly enforce isothermal phase change, which would be characterized by a horizontal plateau region at T_m. The original method's criterion (39) for determining nodes undergoing phase change proves to be ill-suited for this scenario: larger time steps lead to larger undershoots in temperature. The fluctuations around the melting temperature obtained with the proposed tolerance-based HI scheme, on the contrary, are independent of the step size and only controlled by the tolerance ε_tol. The temperature profiles resulting from three different tolerances (0.01, 0.001 and 0.0001) can be seen in Fig. 11. Next, we turn to the efficiency of all methods and examine the average number of Newton iterations per time step shown in Fig. 12. Again, the original variants of the HI scheme need the most Newton iterations and are especially sensitive to the temporal and spatial discretization. They are no longer considered in the remaining examinations. Fig. 13 only compares iterations of the AC and the tolerance-based HI methods. The iteration count increases with increasing time step size for all three methods but stays more or less constant over all spatial discretizations. In the rightmost graph, showing the largest time step, the AC method for the first time requires more iterations than the proposed tolerance-based HI methods.
This melting volume example reveals an already mentioned problem of AC methods, namely the possibility to neglect much of the latent heat by stepping over or passing through the phase change interval too fast. The AC method is used with three widths d ∈ {1 K, 2 K, 3 K} to compute artificial solidus and liquidus temperatures T_s = T_m − d and T_l = T_m + d. The final temperature profiles are graphed in Fig. 14 for three time step sizes and the finest mesh with 100 finite elements. Obviously, the profiles differ in the respective phase change intervals, which would not be the main concern in PBFAM. Instead, the focus lies on the maximum temperature predicted on the left edge. For the smallest time step ∆t = 200 s all widths reach almost the same maximum temperature. Increasing the step size leads to larger discrepancies in the maximum temperature. A smaller width d correlates with higher maximum temperatures, which in turn implies that not all latent heat has been absorbed.

Figure 15: Geometry and moving laser heat source for single track scan example (all dimensions given in mm) [21].
Summary of preliminary simulations: Taking into account all experience gathered from the two numerical examples, the authors recommend the use of the proposed tolerance-based HI method or an AC method. We found the new criterion (48) for determining nodes that undergo phase change to be superior to the one originally introduced in [43], for two reasons. First, the accuracy is user-controllable by setting a respective tolerance. Second, the new stopping criterion (48) typically leads to a significant reduction of nonlinear Newton iterations, except for scenarios with very strict tolerances, which are, however, not expected to be necessary in the targeted PBFAM application.
Naturally, it is hard to predict a priori which method will lead to fewer Newton iterations, since this depends on the specific problem, the tolerances and the (artificial) phase change interval width. Therefore, in the following, an actual example in the context of PBFAM will be investigated.

Single track scan
The following example simulates the scanning of a single track in a PBFAM process; it was introduced in [20] and has also been simulated in [21]. A schematic sketch of the setup is shown in Fig. 15. A volumetric heat source described by Eq. (4) with effective power Ŵ = 30 W and size R = 0.06 mm is applied to the powder domain.
The geometry consists of a cuboid of 0.6 × 0.2 × 0.2 mm³. The top layer of 0.05 mm in z-direction is in powder form and rests on top of a solid substrate domain. The material is a 316L-type steel with parameters summarized in Tab. 2. The whole domain is initialized at a fixed temperature T_0 = 303 K. All surfaces are insulating; only the surface x = 0.6 mm is subject to an essential boundary condition T̂ = T_0 = 303 K.
The laser beam center moves from an initial position at x = −0.06 mm (one laser beam radius outside the domain) with a scanning speed of v = 120 mm/s in x-direction along the symmetry plane y = 0. The powder layer is discretized by a regular hexahedral mesh with element size h_0 = 0.0025 mm, i.e. n_ele^z = 20 elements across the powder layer height. In the substrate domain, a mesh with double element height in z-direction is applied. Moreover, an adaptive time stepping scheme is applied that halves the time step size if no convergence is achieved by the employed Newton-Raphson scheme (within a prescribed maximal number of iterations), and that doubles the step size again after four convergent time steps at the smaller step size level. As initial step size a value of ∆t^(0) = 1 µs has been chosen, which has been verified to yield a sufficiently small time discretization error.
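The adaptive step-size control described above can be sketched as a small state machine. This is an illustrative sketch of the halving/doubling rule as stated in the text, not the paper's actual implementation; the function name, the counter handling and the cap at the base step size are our own choices.

```python
def adapt_step(dt, converged, n_convergent, dt_base):
    """Adaptive time step control: halve dt when the Newton-Raphson
    scheme fails to converge, double it again after four convergent
    steps at the reduced size, and never exceed the base step dt_base.

    Returns the new step size and the updated counter of consecutive
    convergent steps at the current size."""
    if not converged:
        # failed step: retry with half the step size, reset the counter
        return 0.5 * dt, 0
    n_convergent += 1
    if n_convergent >= 4 and dt < dt_base:
        # recovered: grow the step again, but cap it at the base size
        return min(2.0 * dt, dt_base), 0
    return dt, n_convergent
```

In the driver loop, a non-convergent step would additionally discard the rejected solution and repeat the step from the last converged state with the reduced dt.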
In prior simulations of this example, latent heat effects have been taken into account via an enthalpy method [20] and an isothermal HI method [21]. Here, we will use an AC method and subsequently the newly proposed tolerance-based isothermal HI scheme to simulate the process. An artificial melting interval is introduced for the AC method; as a baseline we chose T_s = 1600 K and T_l = 1800 K, i.e., d = 100 K. Isothermal HI is applied with a tolerance of ε_tol = 0.001. The isothermal HI scheme is used in combination with enthalpy-based parameter interpolation, while the AC method is used with temperature-based interpolation. In a first step, qualitative characteristics of the solution shall be discussed. After a short period of time the melt pool shape reaches a steady state. Its geometry can be visualized by the isotherm T = T_m. Fig. 16 compares the results obtained with the AC and tolerance-based HI method to the results reported in [20] and [21]. Both the AC and the HI solution show good agreement with the reference. The melt pool dimensions and peak temperatures are compared quantitatively to the reference solutions in Tab. 3. All compared quantities show good agreement.
Since no further quantitative data is provided by the reference papers, we compare the AC and HI methods with each other. In the preliminary examples in Sections 5.1 and 5.2 a strong dependency on the spatial discretization was recognized. Three additional, coarser spatial discretizations with elements of size 2h_0, 4h_0 and 20/3 h_0 (which results in n_ele^z ∈ {10, 5, 3} elements over the powder layer height) are introduced to investigate this phenomenon for the single track scan. The accuracy of both methods can be judged by looking at characteristic temperature profiles in the steady state. All results presented in the following are shown for (approximately) t = 4.6 µs, which is the same point in time for which the melt pool shape has been illustrated in Fig. 16. First, Fig. 17 shows the surface temperature profiles for all discretizations plotted along the laser path (i.e. y = 0) in the vicinity of the melting front. This front is characterized by high temperature gradients and rates. With increasing element size, larger nonphysical oscillations in the temperature profile are observed for the AC scheme. These oscillations can be traced back to the limitations of the employed first-order finite elements in representing strong gradients and material nonlinearities, here mainly caused by the extreme gradients of the thermal conductivity across the phase interface. Employing finite elements based on higher-order shape functions can remedy this numerical issue [48,49].
No such oscillations occur with the HI scheme, which enforces the temperature in the phase transition region to lie within a temperature interval (implicitly) prescribed by the tolerance ε_tol. Although the shape functions used with the HI scheme are still first-order, the reset of the temperature to a consistent melt temperature as described in (45) seems to prevent the temperature oscillations in the phase interface region observed for AC, even though the HI scheme performs phase change and parameter interpolation within a considerably smaller temperature interval. It is important to note that, so far, the oscillations have not been observed to cause stability issues (e.g. a significant energy increase in the discrete system) and they remain small compared to the overall temperature range.
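The temperature reset mentioned above can be illustrated for the isothermal case. This is a deliberately simplified sketch: eq. (45) in the paper defines the consistent melt temperature precisely, whereas here we simply assume it equals T_m whenever the nodal latent heat indicates an ongoing phase change; the function name is hypothetical.

```python
def reset_temperature(T, H, H_m, T_m):
    """Sketch of the nodal temperature reset during isothermal phase
    change: while the latent heat H lies strictly between 0 and H_m,
    the node is still changing phase and its temperature is pinned to
    the melt temperature T_m; otherwise the computed value is kept."""
    if 0.0 < H < H_m:
        return T_m
    return T
```

This pinning is what produces the near-plateau at T_m in the isothermal HI solutions of Figs. 9 and 10, and it explains why the HI profiles stay oscillation-free where the AC profiles do not.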
A second phase change happens along the laser path (i.e. y = 0.0 mm) when the material cools down again at the tail of the melt pool. Fig. 18 gives a detailed view of the temperature profile in this region. The HI method produces a kink in the temperature profile at T_m, which is to be expected at a phase interface. The AC method produces a mushy phase change region and no kink is observed. Further away from the interface, however, both temperature profiles are in good agreement again.
Another aspect to investigate is the resulting temperature profile transverse to the laser scanning direction. The transition from melt to powder can be investigated with a cut in y-direction at x = 0.4 mm, as shown in Fig. 19. Again, oscillations appear in the AC solution for coarser meshes but not in the HI solution. A similar picture emerges for a cut in y-direction through solid and powder, e.g. at x = 0.2 mm, as illustrated in Fig. 20. In both transverse cuts the temperature profiles of the AC and HI methods differ in proximity to T_m and agree well at some distance from the phase interface. As suspected, in PBFAM applications the chosen method for latent heat only has a very local influence on the resulting temperature field, but does not significantly affect the global temperature characteristics from a rather macroscopic point of view. Another aspect of importance for PBFAM process simulations is numerical efficiency, here assessed in terms of nonlinear solver performance. Fig. 21 depicts the total number of Newton iterations accumulated over the whole simulation time for the different spatial discretizations. The results of the preliminary numerical examples in the previous sections seem to transfer to this larger example: the HI method shows a strong dependency on the spatial resolution. The difference in the number of Newton iterations between the AC and HI methods is less pronounced in the practically relevant range of discretizations (e.g. 3 elements across the powder layer thickness).
Next, it shall be investigated how the accuracy and numerical efficiency of the two considered phase change schemes can be influenced by their respective numerical parameters, namely the phase change interval d of the AC scheme and the tolerance ε_tol of the HI scheme. In a first step, the phase change interval of the AC method is varied for the two coarsest spatial discretizations to see how it affects the temperature oscillations. It has to be noted that this also changes the temperature interval for temperature-based parameter interpolation. The intervals are given by T_s = T_m − d and T_l = T_m + d, where d ∈ {100 K, 250 K, 500 K}. Moreover, two additional versions of HI with relaxed tolerances of ε_tol = 0.01 and ε_tol = 0.1 are simulated. The resulting surface temperatures are plotted in Figs. 22, 23 and 24 in the regions that showed oscillations in the previous plots. While the increased phase change interval for the AC scheme decreases the amplitude of these oscillations by a certain amount, the overall accuracy of the temperature profiles decreases as well. Thus, it has to be concluded that, for a given spatial discretization, the width d of the phase change interval is not a suitable parameter to control the accuracy of AC schemes. The solutions obtained with different tolerances for the HI method show no oscillations. The solution from HI with tolerance ε_tol = 0.01 is indistinguishable from the one with the stricter tolerance ε_tol = 0.001. The solution from HI with tolerance ε_tol = 0.1 deviates slightly from the ones with stricter tolerances in the solid-powder transition region, see Fig. 24. However, it still appears considerably more accurate than the solution from AC with d = 500 K. Finally, the initial time step size ∆t_0 for the three considered HI and AC variants is increased up to 64 times the original value. The total number of Newton iterations that is then required is depicted in Fig. 25.
It is emphasized that the time step halving scheme is still employed; it was optimized individually for each method to yield low iteration counts. Moreover, it has been checked that the time discretization error is still sufficiently small, i.e. there is no visible difference in the results for base time step sizes up to 16 times ∆t_0 for all variants. Higher step sizes lead to visible albeit small deviations in the temperature profiles, which are, however, smaller than the deviations resulting from the different HI and AC variants. According to Fig. 25, an increased phase change interval d in AC allows for larger step sizes and thus fewer iterations. Especially on the coarsest mesh, HI with a high tolerance (ε_tol = 0.1) yields a comparable number of accumulated Newton iterations, but at considerably increased accuracy, compared to the AC method with the (unphysically) large phase change interval d = 500 K. All in all, for a given spatial discretization, the proposed HI scheme allows one to directly control numerical efficiency and accuracy by means of a user-defined tolerance. Within the considered scope of numerical examples and practically relevant spatial and temporal discretizations, this property makes the novel tolerance-based HI scheme preferable to the original HI scheme and the investigated AC method.

Conclusion
In this work, an extension of phase change and latent heat models for the simulation of metal powder bed fusion additive manufacturing processes on the macroscale has been proposed and different models have been compared with respect to accuracy and numerical efficiency. In this context, a systematic formulation of phase fraction variables has been proposed, relying either on temperature- or enthalpy-based interpolation schemes. Moreover, latent heat has been considered either by means of an apparent capacity (AC) or a heat integration (HI) method. For the latter, a novel phase change criterion has been proposed, which combines superior accuracy and numerical efficiency (in terms of an improved nonlinear solver performance allowing for larger time step sizes and fewer iterations per time step) as compared to the original HI scheme. Compared to the AC approach, the numerical efficiency of the proposed tolerance-based HI scheme is comparable while offering an increased level of accuracy. Numerical results from the literature have been reproduced, which demonstrates the validity of the proposed scheme. In summary, both the AC and the proposed tolerance-based HI scheme perform well when considering the accuracy requirements as well as practically relevant spatial and temporal discretization resolutions for PBFAM process simulation. Specifically, global temperature characteristics, such as the peak temperature, can be accurately captured with both methods. Still, the authors believe that the new tolerance-based HI method is advantageous over AC schemes due to the user-controllable tolerance, which allows one to directly control the numerical efficiency and accuracy of the scheme, and which can directly be interpreted as the error in latent heat made during a phase change process. For part-scale PBFAM models, thermo-mechanical interaction is one of the primary interests.
Structural parameters such as Young's modulus or the thermal expansion coefficient may depend on the temperature history in a similar fashion as proposed in the current work for the thermal parameters. Future research will focus on the question of how the proposed methods for latent heat and parameter interpolation behave in the thermo-mechanically coupled scenario.