
Enhanced prediction of thermomechanical systems using machine learning, PCA, and finite element simulation

Abstract

This research paper presents a comprehensive methodology for analyzing wet clutches, focusing on their intricate thermomechanical behavior. The study combines advanced encoding techniques, such as Principal Component Analysis (PCA), with metamodeling to efficiently predict pressure and temperature distributions on friction surfaces. By parametrically varying input parameters and utilizing Finite Element Method (FEM) simulations, we generate a dataset comprising 200 simulations, divided into training and testing sets. Our findings indicate that PCA encoding effectively reduces data dimensionality while preserving essential information. Notably, the study reveals that only a few PCA components are required for accurate encoding: two components for temperature distribution and pressure, and three components for heat flux density. We compare various metamodeling techniques, including Linear Regression, Decision Trees, Random Forest, Support Vector Regression, Gaussian Processes, and Neural Networks. The results underscore the varying performance of these techniques, with Random Forest excelling in mechanical metamodeling and Neural Networks demonstrating superiority in thermal metamodeling.

Introduction

Motivation

Wet clutches are frequently employed in the automotive and industrial sectors to transmit torque between rotating shafts. However, the thermomechanical behavior of wet clutches is intricate and influenced by various factors such as friction, wear, temperature, pressure, and fluid flow. Finite element simulation is a potent tool for modeling the performance of wet clutches, yet it demands significant computational effort and time.

An alternative approach is metamodeling, which employs statistical models to approximate the behavior of complex systems based on a set of input–output data pairs. Metamodeling has been successfully applied to various engineering problems, including structural optimization, fluid dynamics, and heat transfer. The objective of this study is to explore the potential of metamodels in predicting the thermomechanical behavior of wet clutches and optimizing their design.

The findings of this research can have practical implications for the design and analysis of wet clutches, as well as for the development of more efficient and sustainable energy systems. The utilization of metamodels to forecast the behavior of wet clutches saves time and resources, thereby contributing to the enhanced efficiency of clutches and, consequently, propulsion systems.

Related work

Thermo-mechanical behavior and damage mechanisms

Since wet clutches often play a safety-critical role, a profound understanding of their damage mechanisms is of paramount importance. Damage to clutch components can be caused by mechanical loads, thermal overload, corrosion, erosion, and wear. Essentially, damage mechanisms can be categorized into two groups: long-term damage and spontaneous damage [1].

Long-term damage in wet clutches can result from continuous usage and the associated wear and tear of friction materials [2]. The number of shifting operations before damage or failure can sometimes reach several tens of thousands of shifts [3].

On the other hand, spontaneous damage in wet clutches can be caused by sudden load peaks or extreme operating conditions [1]. Since this phenomenon does not accumulate over time and occurs unexpectedly, it is particularly critical and safety-relevant, as a single engagement can jeopardize the safety of the entire system [4]. Typically, when using sintered friction linings, the damage mechanism of sinter transfer occurs. Sinter transfer represents a form of fretting and is caused by thermal overloading [5]. This leads to material transfer between the friction partners, resulting in an increase in the coefficient of friction. Depending on the extent of this increase, different classes of damage can be distinguished [2]. In the case of organic friction linings, on the other hand, the phenomenon of hot spots occurs [4, 5]. The formation of hot spots is also attributed to an elevated temperature at the friction surface [5].

The formation of hot spots can be explained through the theory of thermoelastic instability (TEI). The frictional heat generated at the contact surfaces leads to a temperature increase in the friction plates, ultimately resulting in their thermal deformation. This thermal expansion of the material causes a change in the pressure distribution. Since the heat flow resulting from friction depends on the pressure distribution, thermal deformation also leads to a change in heat flow. Consequently, interrelationships and dependencies emerge between frictional heat and thermal deformation, forming a complex thermoelastic system. Beyond a certain sliding speed, the thermoelastic system behaves in an unstable manner, leading to very high local pressures and temperatures [6,7,8,9].

Finite Element Simulations of clutches

Finite Element Simulation models are crucial for clutch analysis and design. Historically, researchers like Kennedy and Ling [10] and Zagrodzki [11, 12] have employed these models to understand temperature distribution and thermal stresses in clutches. Further studies by Tirovic and Day [13], Zhao et al. [14], Hwang and Wu [15], and Abdullah et al. [16] expand on this, investigating factors like pressure distribution, friction materials, and heat flow.

Belhocine and Abdullah [17] integrate Computational Fluid Dynamics (CFD) with Finite Element models, calculating heat transfer coefficients and iteratively analyzing temperature and thermal stresses. Moghanlou and Googarchin [18] examine brake fatigue using a transient coupled thermomechanical Finite Element analysis. Wang and Zhang [19] explore wet clutches using a three-dimensional Finite Element model, incorporating the theory of thermoelastic instability.

Schneider et al. [20] develop and validate through experiments a parametric two-dimensional Finite Element model to study wet clutches during transient operations. Their model accounts for temperature and pressure distribution.

Surrogate modeling of FEM-simulations

In the field of structural mechanics, Hoffer et al. [21] conduct a study comparing various metamodeling methods, including both classical machine learning and deep learning, in three application cases involving plate deformation, beam bending, and block compression with specific input variables. Vurtur Badarinath et al. [22] explore the use of machine learning algorithms for directly estimating stress distribution in structures from real-time measurements, with artificial Neural Networks providing more accurate results. Nie et al. [23] utilize Convolutional Neural Networks (CNN) to predict stress fields in two-dimensional linear elastic cantilever structures under external static loads, achieving low prediction errors. Haghighat et al. [24] introduce a novel approach by incorporating the momentum balance and constitutive relations into Physics Informed Neural Networks (PINN) [25], enhancing their accuracy in modeling linear elasticity and extending their capabilities to nonlinear problems such as von Mises elastoplasticity. However, challenges arise in training PINN due to their multi-objective optimization nature and difficulties in handling problems with discontinuous solutions, as highlighted by the authors. Jeong et al. [26] propose a novel approach termed piecewise PINN, demonstrating superior accuracy compared to conventional PINN for non-smooth benchmark problems. Their study also highlights the advantages of piecewise PINN in terms of computational efficiency and performance in both interpolation and extrapolation tasks when compared to deep neural network-based surrogate models trained on labeled data from finite element simulations. These findings suggest that deep learning models offer promise in structural design and topology optimization.

In the context of metal forming, D'Addona and Antonelli [27] examine hot forging optimization, considering uncertain factors. They use a Neural Network model as a replacement for finite element simulation, with promising results. Chan et al. [28] develop an integrated methodology based on the Finite Element Method (FEM) and Artificial Neural Networks (ANN) to approximate design parameter functions and evaluate design performance, identifying optimal designs. In biomechanical applications, Lorente et al. [29] combine FEM and machine learning to model the deformation of the human liver during breathing in real time. Martínez-Martínez et al. [30] develop a data-driven method to simulate breast tissue deformation in real time for medical interventions, such as biopsies and radiation therapy planning.

For manufacturing process simulations, Mozaffar et al. [31] develop a recurrent Neural Network structure for accurately predicting thermal histories in Directed Energy Deposition (DED) processes. Zobeiry and Humfeld [32] use physics-informed Neural Networks to solve heat transfer problems in manufacturing and engineering applications, reducing computational time compared to trial-and-error finite element simulations. Kumar et al. [33] introduce a data-driven approach to reduce computation time in predicting temperature fields in Powder Bed Fusion (PBF) processes. Abio et al. [34] explore metamodels in the process simulation of press-hardening steel sheets, significantly reducing computation time without compromising prediction accuracy. These studies showcase the potential of metamodeling in various engineering applications, offering faster and more efficient simulations.

In a related study, Schneider et al. [35] delve into the critical analysis of multi-plate clutches, emphasizing their pivotal role in safety-critical applications. The study addresses the challenges posed by spontaneous damage, particularly under high loads and temperatures, which can compromise the entire system's functionality. To mitigate the computational complexity of Finite Element Analysis (FEA) in predicting temperatures, the study explores the application of machine learning (ML) methods, including polynomial regression, decision tree, support vector regressor, Gaussian process, and neural networks, as surrogate models. The evaluation focuses on predicting the maximum clutch temperature during a slip cycle under varying axial force, speed, and lining thickness. Results demonstrate the efficacy of ML approaches, with the Gaussian process and the backpropagation neural network emerging as the most promising methods, meeting the requirement for real-time predictions during operation.

Research objectives

This research paper is driven by several key objectives, aiming to advance our understanding of the thermomechanical behavior of wet clutches and to develop efficient predictive tools. The primary research objectives are as follows:

  • To develop and assess effective data encoding techniques, specifically Principal Component Analysis (PCA), to transform high-dimensional simulation data into more compact, informative representations.

  • To determine the optimal number of PCA components required for accurate encoding of temperature distribution, pressure, and heat flux density, which are crucial factors in clutch behavior analysis.

  • To investigate and compare the performance of various metamodeling algorithms, including Linear Regression, Decision Trees, Random Forest, Support Vector Regression, Gaussian Processes, and Neural Networks, in modeling the mechanical and thermal aspects of wet clutches.

  • To develop robust metamodels capable of efficiently predicting pressure and temperature distributions on friction surfaces, facilitating faster and more accurate thermomechanical simulations.

  • To provide a valuable tool for engineers and researchers to analyze wet clutches, supporting the design and optimization of these components.

By addressing these research objectives, this study endeavors to enhance our ability to predict and analyze the thermomechanical behavior of wet multi-plate clutches, offering valuable insights and practical applications in engineering and design. The overall procedure is illustrated in Fig. 1.

Fig. 1 Graphical representation of the general procedure

Background

Thermomechanical simulation of multi-plate clutches

The thermal behavior of wet clutches has been discussed in previous work by Schneider et al. [20]. The heat flow introduced into the clutch system originates from the transformation of kinetic energy into thermal energy at the friction interfaces. The heat flow varies as a function of space and depends on both the radial distance from the axis of rotation and the local pressure. The resulting local heat flux \(\dot{q}\) for a given radial distance \(r\) from the axis of rotation is given by:

$$\dot{q}(r)=\mu \cdot {\omega }_{rel}\cdot r\cdot p(r)$$
(1)

where \(\mu \) is the coefficient of friction, \({\omega }_{rel}\) is the relative rotational speed of the two friction interfaces, and \(p\) is the local contact pressure.
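For illustration, Eq. 1 can be evaluated directly over a set of surface elements; the following minimal sketch uses placeholder values rather than parameters from this study:

```python
import numpy as np

def local_heat_flux(mu, omega_rel, r, p):
    """Local heat flux density per Eq. 1: q_dot(r) = mu * omega_rel * r * p(r)."""
    return mu * omega_rel * r * p

# Placeholder operating point, for illustration only
r = np.linspace(0.08, 0.12, 50)      # radial element positions [m]
p = np.full_like(r, 2.0e6)           # local contact pressure [Pa]
q_dot = local_heat_flux(mu=0.1, omega_rel=20.0, r=r, p=p)  # [W/m^2]
```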

The applied heat flux dissipates both into the steel plate and the friction lining. Under the assumption that the heat flux absorbed by the lubricating film is negligible, the following thermal boundary condition can be used to describe the contact at the friction surfaces:

$$\dot{q}={\lambda }_{s}\cdot \frac{\partial {T}_{s}}{\partial z} - {\lambda }_{f}\cdot \frac{\partial {T}_{f}}{\partial z}$$
(2)

where \({\lambda }_{s}\) and \({\lambda }_{f}\) are the thermal conductivities of the steel plate and the friction lining, and \(\frac{\partial {T}_{s}}{\partial z}\) and \(\frac{\partial {T}_{f}}{\partial z}\) are the temperature gradients of the steel plate and the friction lining in the axial direction \(z\). Furthermore, the temperature of the two components must be the same at the contact interface, so the following must apply:

$${T}_{s} = {T}_{f}$$
(3)

The thermal boundary condition for the contacts between the friction lining and the carrier plate is given by:

$$0={\lambda }_{\text{c}}\cdot \frac{\partial {T}_{\text{c}}}{\partial z} - {\lambda }_{f}\cdot \frac{\partial {T}_{f}}{\partial z}$$
(4)
$${T}_{\text{c}} = {T}_{f}$$
(5)

Within the individual components of the clutch, the thermal behavior can be described in cylindrical coordinates by the following heat conduction equation:

$$\frac{{\rho }_{i}\cdot {c}_{i}}{{\lambda }_{i}}\cdot \frac{\partial {T}_{i}}{\partial t}=\frac{{\partial }^{2}{T}_{i}}{\partial {r}^{2}}+\frac{1}{r}\cdot \frac{\partial {T}_{i}}{\partial r}+\frac{{\partial }^{2}{T}_{i}}{\partial {z}^{2}}$$
(6)

where \({\rho }_{i}\), \({c}_{i}\) and \({\lambda }_{i}\) are the density, the specific heat capacity, and the thermal conductivity of the respective component. Furthermore, the components dissipate heat in the axial direction via the inner and outer carriers. For more detailed information, please refer to Schneider et al. [20].
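For intuition, a single explicit finite-difference update of Eq. 6 on an axisymmetric (r, z) grid might look as follows; the grid and boundary handling are simplifying assumptions of this sketch and do not reproduce the ANSYS model, which additionally enforces Eqs. 2–5 at the interfaces:

```python
import numpy as np

def heat_step(T, r, dr, dz, dt, rho, c, lam):
    """One explicit finite-difference update of Eq. 6 on an (r, z) grid.

    T: (nr, nz) temperature field of one component; r: (nr,) radial coordinates.
    Boundary nodes are left untouched here; the real model applies Eqs. 2-5.
    """
    alpha = lam / (rho * c)  # thermal diffusivity lambda / (rho * c)
    d2T_dr2 = (T[2:, 1:-1] - 2 * T[1:-1, 1:-1] + T[:-2, 1:-1]) / dr**2
    dT_dr = (T[2:, 1:-1] - T[:-2, 1:-1]) / (2 * dr)
    d2T_dz2 = (T[1:-1, 2:] - 2 * T[1:-1, 1:-1] + T[1:-1, :-2]) / dz**2
    T_new = T.copy()
    T_new[1:-1, 1:-1] += dt * alpha * (d2T_dr2 + dT_dr / r[1:-1, None] + d2T_dz2)
    return T_new  # dt must satisfy the explicit stability limit
```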

To compute the generated heat flux in Eq. 1, the mechanical behavior of the clutch must be considered in order to obtain the pressure distribution at the friction surfaces. The total strain can be divided into a mechanical part \({\varepsilon }_{E}\) and a thermal part \({\varepsilon }_{T}\), which results from the thermal expansion of the material:

$$\varepsilon ={\varepsilon }_{\text{E}}+{\varepsilon }_{\text{T}}$$
(7)

with

$${\varepsilon }_{E}=\frac{\Delta L}{{L}_{0}}$$
(8)
$${\varepsilon }_{T}=\alpha \cdot \Delta T$$
(9)

where \(\Delta L\) is the change of length, \({L}_{0}\) is the initial length, \(\Delta T\) is the temperature difference, and \(\alpha \) is the coefficient of thermal expansion.

Based on the computed strains, the stresses \(\sigma \) can be computed with the following equation:

$$\sigma =D\cdot ({\varepsilon }_{E}+{\varepsilon }_{T})$$
(10)

Principal component analysis

Principal Component Analysis (PCA) is a widely used technique in data analysis and machine learning, particularly in the context of dimensionality reduction [36]. It addresses the challenge of dealing with high-dimensional datasets by transforming the data into a new coordinate system whose dimensions (or features), called principal components, are linearly uncorrelated.

The fundamental idea behind PCA is to identify the directions in which the data varies the most and represent the data using these directions, or principal components, while discarding the least important directions. These principal components are ordered by the amount of variance they capture, with the first principal component capturing the maximum variance in the data, the second principal component capturing the second maximum variance, and so on [37].

By selecting a subset of the principal components that capture most of the variance in the data, PCA allows for dimensionality reduction while preserving as much information as possible [38]. This reduction in dimensionality not only simplifies the dataset but also aids in visualization, interpretation, and computational efficiency in subsequent analysis tasks.

PCA achieves dimensionality reduction by performing a linear transformation of the original dataset onto a lower-dimensional subspace spanned by the principal components. This transformation is accomplished through the computation of the eigenvectors and eigenvalues of the covariance matrix of the original dataset. The eigenvectors represent the directions of maximum variance, while the corresponding eigenvalues quantify the amount of variance along each eigenvector [37].
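A compact sketch of this eigendecomposition view of PCA is given below; it is illustrative only and not the implementation used in this study:

```python
import numpy as np

def pca_fit(X, n_components):
    """Fit PCA by eigendecomposition of the covariance matrix.

    X: (n_samples, n_features) data matrix."""
    X_centered = X - X.mean(axis=0)
    cov = np.cov(X_centered, rowvar=False)           # (n_features, n_features)
    eigvals, eigvecs = np.linalg.eigh(cov)           # ascending eigenvalues
    order = np.argsort(eigvals)[::-1]                # sort by variance, descending
    components = eigvecs[:, order[:n_components]]    # principal directions
    explained_ratio = eigvals[order][:n_components] / eigvals.sum()
    return components, explained_ratio

# Encoding is then a projection: Z = (X - X.mean(axis=0)) @ components
```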

In summary, PCA offers a powerful tool for dimensionality reduction by identifying the most important patterns in high-dimensional data and representing them in a lower-dimensional space. It finds applications in various fields, including image and signal processing, pattern recognition, and data compression, where the ability to extract essential features from complex datasets is crucial for further analysis and decision-making. For further details, please refer to [36, 37].

Methodology

Data generation and preprocessing

As a basis for developing the surrogate model, the Finite Element Method (FEM) simulation model developed by Schneider et al. [20] is employed. The model represents a two-dimensional axisymmetric model of a wet clutch with 10 friction interfaces. The overall clutch system consists of 6 steel plates, 5 carrier plates, 10 friction pads, 1 pressure plate, and 1 reaction plate. The geometry of the described components is depicted in Fig. 2.

Fig. 2 Geometric dimensions and mechanical boundary conditions [20]

The model is parametrically developed in ANSYS APDL, allowing for variations in individual geometry, material properties, and loading parameters. The structure of the simulation model comprises two distinct parts. The first part encompasses the mechanical aspects of the simulation, accounting for pressures and strains due to internal loads within the components. Factors such as axial force and temperature distribution are considered. The second part addresses the thermal aspects of the simulation. Heat flows generated at the friction interfaces are determined based on the pressure distribution calculated in the first part. These heat flows subsequently serve as loads for the thermal simulation. A transient thermal simulation is conducted to obtain the temperature distribution across the clutch. Upon completion of both simulation phases, an update of the clutch's operating conditions is performed, including updates to the pressure and temperature distributions. This sequence represents a single time step and is iterated for the specified number of time steps, using the updated operating conditions as the initial state. A comprehensive depiction of the entire process flow is presented in Fig. 3.

Fig. 3 Flowchart for the simulation process [20]

The data generation process is facilitated using the Latin Hypercube Sampling method [39]. This sampling technique enables efficient and uniform distribution of data points across the entire parameter space. The limits of axial force and rotational speed are defined according to the specifications in Table 1. Multiple combinations are generated by combining different axial force and rotational speed values to cover a wide range of operating conditions. Each data point thus represents a unique combination of axial force and rotational speed. Figure 4 illustrates the applied load history.

Table 1 Boundaries of the varied input parameters
Fig. 4 Exemplary load pattern [20]
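Such a sampling plan can be generated, for example, with SciPy's quasi-Monte Carlo module; in this sketch the bounds are placeholders standing in for the limits of Table 1:

```python
from scipy.stats import qmc

sampler = qmc.LatinHypercube(d=2, seed=42)   # 2 inputs: axial force, speed
unit_samples = sampler.random(n=200)         # 200 points in the unit square

# Placeholder bounds; the actual limits are those specified in Table 1
l_bounds = [1_000.0, 10.0]                   # [min axial force, min speed]
u_bounds = [15_000.0, 200.0]                 # [max axial force, max speed]
samples = qmc.scale(unit_samples, l_bounds, u_bounds)
```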

For dataset creation, 200 simulations are conducted. This dataset is then divided into training and testing sets. The testing set encompasses data from 25 simulations, while the remaining data points are allocated to the training set. The aim of this partitioning is to evaluate the predictive quality of trained models for unseen cases.

Encoding

In order to utilize the simulation data for further analysis, the output values must be encoded in a format that can be effectively processed by machine learning algorithms. As described in the preceding section, the simulation output consists of a value per node or per element, which, in the case of the current model, amounts to a minimum of 12,000 nodes and over 100 elements per load step. Because spatially adjacent values often exhibit correlations, the dataset contains a significant amount of redundant information. Working with such high-dimensional data is computationally intensive and can lead to poor model performance due to the curse of dimensionality [40].

To address these challenges, encoding techniques are employed to transform the raw simulation data into a more compact and informative representation. This process typically involves mapping the output values onto a lower-dimensional feature space, where key patterns and variations in the data can be more easily identified and captured.

In the present study, encoding temperature and pressure distributions is crucial for developing an accurate and efficient surrogate model for the thermomechanical behavior of wet clutches. The encoded features are used as input for the machine learning algorithms employed in creating the surrogate model, allowing the model to capture the intricate relationships between the input features and the target output variables. Without appropriate encoding, the high-dimensional and intricate nature of the simulation data would hinder precise modeling and prediction of the system's behavior.

Through this dimensionality reduction and encoding process, a concise representation of the temperature distribution within the clutch system is achieved. This reduces memory requirements and enables faster processing. These condensed representations of temperature distributions then serve as input for further analyses and metamodeling approaches, aimed at investigating intricate connections between temperature distributions and other properties of the clutch system.

The process of dimensionality reduction and encoding is employed to transform the temperature distribution data of the entire clutch system, originally consisting of 11,963 FEM nodes, into 256 × 256 matrices. This transformation allows the complex data to be represented with a significantly reduced number of values while preserving essential information. The process commences by extracting the temperature distribution within the individual components of the clutch system (steel plates, carrier plates, pads, etc.). To ensure uniform representation, the temperature distributions of all components are scaled to a predetermined format (a 256 × 256 matrix). This step is exemplified in Fig. 5.

Fig. 5 Extraction of the temperature distribution of the individual components and transformation into a 256 × 256 matrix

Subsequently, each temperature distribution is transformed into a one-dimensional vector, followed by the application of Principal Component Analysis (PCA).

Principal Component Analysis (PCA) is a widely employed technique in dimensionality reduction, commonly applied in various research domains [41]. The number of PCA components is a hyperparameter that requires further investigation. The objective is to effectively represent the temperature distribution within a component using a small number of values (e.g., fewer than 5). This process is illustrated in Fig. 6.

Fig. 6 Flattening of the temperature distribution of a single component, represented as a 256 × 256 matrix, into a 3 × 1 vector using principal component analysis (PCA)
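In code, the flattening and PCA encoding of the per-component temperature fields could be sketched as follows with scikit-learn; the array and file names are assumptions for illustration:

```python
import numpy as np
from sklearn.decomposition import PCA

# fields: (n_samples, 256, 256) stack of per-component temperature matrices
fields = np.load("temperature_fields.npy")    # hypothetical file name
X = fields.reshape(len(fields), -1)           # flatten to (n_samples, 65536)

pca = PCA(n_components=3)
codes = pca.fit_transform(X)                  # (n_samples, 3) encoded vectors
print(pca.explained_variance_ratio_.cumsum()) # cumulative explained variance
```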

If an encoding with n values is performed for each component, then the complete temperature distribution of the clutch system can be represented using 23 · n values. When using a small number of PCA components (e.g., 3 components), the number of values required to represent the entire temperature distribution can be reduced by two orders of magnitude, from tens of thousands to fewer than a hundred values.

Since each clutch system consists of 23 components, every simulated clutch state yields 23 individual temperature distributions. From the 175 FEM simulations used, each comprising 28 time steps, a total of 112,700 data points (175 × 28 × 23) are available.

Prior to performing Principal Component Analysis, various preprocessing steps are applied. Initially, a further train-validation split is executed, dividing the training dataset described in the "Data Generation and Preprocessing" section into a training dataset (70%) and a separate validation dataset (30%). The training dataset is used to fit the PCA, while the validation dataset is used independently to evaluate its performance. Moreover, standardization is applied by scaling the data to a mean of 0 and a standard deviation of 1. This step is crucial to ensure that all features are on the same scale and that no individual features dominate.

To validate the quality of the encoding, a reconstruction assessment is conducted: the encoded data is transformed back into the original space, and the reconstruction error serves as a measure of the accuracy of the encoding. In this evaluation, the original temperature distribution is reconstructed from the reduced dimensionality and compared to the actual temperature distribution. This is achieved by applying the inverse transformation of the dimensionality reduction technique, followed by reversing the normalization. The mean absolute error (MAE) and, where possible, the mean absolute percentage error (MAPE) are employed point-wise as measures of the reconstruction error.
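This reconstruction check can be expressed directly with the inverse transforms; a sketch under the same naming assumptions (X_train and X_val denote flattened temperature fields):

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA

scaler = StandardScaler()
X_train_s = scaler.fit_transform(X_train)     # standardize: mean 0, std 1
X_val_s = scaler.transform(X_val)

pca = PCA(n_components=3).fit(X_train_s)

# Encode, decode, and reverse the standardization on the validation set
codes = pca.transform(X_val_s)
X_rec = scaler.inverse_transform(pca.inverse_transform(codes))

mae = np.abs(X_rec - X_val).mean()
mask = X_val != 0                             # MAPE is undefined at zero values
mape = 100 * np.abs((X_rec[mask] - X_val[mask]) / X_val[mask]).mean()
```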

In this research paper, we set a cumulative explained variance threshold of 99.9% for the principal component analysis (PCA), despite the first component already explaining over 98% of the variance in the dataset. The substantial dominance of the first component (shown in the "Encoding" section) suggests the presence of a dominant underlying structure or pattern in the data, potentially indicating inherent correlations or trends. However, by setting a higher cumulative explained variance threshold of 99.9%, we aim to capture finer variations and nuances present in the dataset, beyond what is accounted for by the dominant first component alone. This decision reflects our aim of maximizing the fidelity of the reduced-dimensional representation while accounting for minor variations that might be obscured by the dominance of the first component. Moreover, achieving a 99.9% cumulative explained variance is anticipated to yield a substantially reduced reconstruction error, indicating a more accurate representation of the original data and facilitating robust analyses and insights into its underlying structure.

A high accuracy in reconstructing the temperature pattern indicates that PCA encoding is an effective representation method, enabling the compression of temperature data while preserving essential information. This lays the foundation for deploying the metamodel to predict the thermal properties of the clutch system.

Surrogate modeling

The mechanical metamodel is built upon a methodology similar to that of the mechanical component of the FEM simulation. The input variables consist of the temperature distribution from the previous time step and the currently applied axial force, while the contact pressure at the friction interfaces is generated as the metamodel's output. However, unlike in the FEM analysis, the encoded variables (temperature distribution and contact pressure) are utilized as inputs and outputs of the metamodel to facilitate efficient and rapid prediction of the contact pressure at the friction interfaces. The fundamental concept of the mechanical metamodel is illustrated in Fig. 7.

Fig. 7 Inputs and outputs of the mechanical metamodel
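As a sketch, such a mechanical metamodel can be set up as a multi-output regressor mapping the encoded temperature state and the axial force to the encoded contact pressures; the array names and dimensions below are assumptions for illustration:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Assumed shapes: T_codes (n, 23 * 3) encoded temperatures of the 23 components,
# F_axial (n, 1) applied axial force, P_codes (n, 10 * 2) encoded pressures
X = np.hstack([T_codes, F_axial])
mech_model = RandomForestRegressor(n_estimators=200, random_state=0)
mech_model.fit(X, P_codes)                # scikit-learn handles multi-output y
P_pred = mech_model.predict(X)
```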

The thermal metamodel, following a parallel approach to the mechanical metamodel, is based on a methodology similar to the thermal segment of the FEM simulation. It employs the encoded heat flux density as an input, instead of the axial force, and produces the encoded temperature distribution for the subsequent load step as its output. Analogous to the mechanical metamodel, the encoded variables (heat flux density and temperature distribution) are employed as inputs and outputs of the thermal metamodel, enabling efficient and swift forecasting of the updated temperature distribution. The central concept of the thermal metamodel is depicted in Fig. 8.

Fig. 8 Inputs and outputs of the thermal metamodel
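A minimal Keras sketch of the corresponding thermal mapping is shown below; the layer sizes and input/output dimensions are illustrative assumptions, not the tuned architecture of this study:

```python
import tensorflow as tf

n_in = 23 * 3 + 10 * 3   # assumed: encoded temperatures + encoded heat fluxes
n_out = 23 * 3           # encoded temperature distribution of the next step

thermal_model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(n_in,)),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(n_out),
])
thermal_model.compile(optimizer="adam", loss="mae")
# thermal_model.fit(X_train, y_train, validation_data=(X_val, y_val), epochs=200)
```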

This study explores five distinct machine learning methodologies to formulate a surrogate model, each elucidated below with reference to Murphy [42] unless specified otherwise.

Polynomial Regression (PR): PR, a subset of linear regression, employs polynomial functions for basis function expansion. The model utilizes higher-order polynomials to capture non-linear relationships.

Decision Tree (DT): Decision trees divide the input space into distinct regions using the CART algorithm, forming a tree-like structure.

Support Vector Regression (SVR): SVR, a parametric model employing kernels, predicts outputs based on a subset of the training data. The model's construction involves solving a constrained optimization problem to strike a balance between model flatness and tolerance for deviations larger than ϵ.

Gaussian Process (GP): GP, a non-parametric method, infers distributions over functions, especially advantageous when data is noise-free.

Backpropagation Neural Network (BPNN): Neural Networks, inspired by biological neurons, consist of interconnected units. The output h of a neuron is a linear combination of its inputs, passed through a nonlinear activation function. The network, with architecture defined by W (weight matrix), b (bias vector), and ϕ (activation function), is trained using backpropagation to address specific problems.

The implementation of the metamodels is carried out using suitable program libraries and software tools that support machine learning and regression-based techniques. For the implementation of Linear Regression, Decision Trees, Random Forest, Support Vector Regression, and Gaussian Process, the Python library Scikit-Learn [43] is utilized. The development of neural networks is facilitated using the Python library Tensorflow [44], developed by Google. For each model, the hyperparameters specified in Table 2 are examined and optimized.

Table 2 Parameter space of the hyperparameters

For the validation of the metamodels, the existing data points of the training set, consisting of a total of 175 data points, are divided into a separate training and validation set (70%/30%). This step enables an objective evaluation of the metamodel performance and also serves for hyperparameter optimization. The validation set is used to determine the optimal hyperparameters.
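A validation-based search of this kind might be written as follows; the grid below is a placeholder, not the parameter space of Table 2:

```python
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split

X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.3, random_state=0)

best_mae, best_params = float("inf"), None
for n_estimators in [50, 100, 200]:            # placeholder grid
    for max_depth in [None, 10, 20]:
        model = RandomForestRegressor(n_estimators=n_estimators,
                                      max_depth=max_depth, random_state=0)
        model.fit(X_tr, y_tr)
        mae = mean_absolute_error(y_val, model.predict(X_val))
        if mae < best_mae:
            best_mae, best_params = mae, (n_estimators, max_depth)
```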

Thermomechanical simulation

The thermomechanical simulation is an integrated process that combines the mechanical metamodel and the thermal metamodel. This process is iterated for each time step, analogous to the FEM model outlined in "Data Generation and Preprocessing" section, with the updated results serving as input for the subsequent time step. This approach establishes a consistent coupling between temperature and mechanical load to model the thermomechanical effects within the clutch system. The entire simulation process is depicted in Fig. 9.

Fig. 9 Flowchart for the simulation process with metamodels

First, the initial temperature distribution is encoded using PCA to obtain an encoded temperature distribution. This encoded temperature distribution, along with the axial force of the current load step, is passed to the mechanical metamodel. The mechanical metamodel then produces the contact pressure at the individual friction interfaces as encoded values. Subsequently, the encoded contact pressure is reconstructed using the stored PCA model, yielding the pressure for the individual elements on the friction surface. With the reconstructed pressure, the current rotational speed, and the mean radius of the elements, the generated heat flux density can be calculated (see Eq. 1). This heat flux density is then encoded using PCA.

The thermal metamodel employs the computed encoded heat flux density and the encoded temperature distribution to generate the updated encoded temperature distribution. This updated encoded temperature distribution serves as the starting point for the mechanical simulation in the subsequent time step.
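Putting the pieces together, the coupled loop of Fig. 9 can be sketched as follows; all encoder and model objects (pca_T, pca_p, pca_q, mech_model, thermal_model) as well as the load arrays are assumptions carried over from the sketches above, and standardization steps are omitted for brevity:

```python
import numpy as np

T_code = pca_T.transform(T0.reshape(1, -1))        # encode initial temperature

for step in range(n_steps):
    # Mechanical metamodel: encoded temperature + axial force -> encoded pressure
    x_mech = np.hstack([T_code, [[F_axial[step]]]])
    p_code = mech_model.predict(x_mech)

    # Reconstruct the pressure and evaluate Eq. 1 element-wise
    p = pca_p.inverse_transform(p_code)
    q_dot = mu * omega_rel[step] * r_mean * p

    # Encode the heat flux density and advance the thermal metamodel one step
    q_code = pca_q.transform(q_dot)
    T_code = thermal_model.predict(np.hstack([T_code, q_code]))
```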

Results

The results section is structured to provide a comprehensive analysis of the encoding and metamodeling processes, as well as the subsequent thermomechanical simulations employing the developed metamodels. In the "Encoding" section, we delve into the encoding phase, subdivided into the "Temperature" and "Pressure" sections, focusing on the encoding of temperature and pressure, respectively. Subsequently, the "Metamodeling" section addresses the metamodeling phase, which comprises the development and evaluation of mechanical and thermal metamodels. Following the metamodeling phase, the "Thermomechanical simulation using metamodels" section integrates these metamodels into thermomechanical simulations, examining their performance and shedding light on the intricacies of predicting temperature and pressure distributions in a simulated environment.

Encoding

Temperature

The findings regarding the encoding of temperature distribution are depicted in Fig. 10, which showcases both the Mean Absolute Percentage Error (MAPE) and the cumulative explained variance concerning the number of Principal Component Analysis (PCA) components. This dual analysis offers complementary insights into the effectiveness of dimensionality reduction. As shown in Fig. 10, the cumulative explained variance increases with the number of PCA components, indicating the proportion of variance in the original data captured by the reduced-dimensional representation. Concurrently, MAPE values demonstrate that an increase in the number of PCA components leads to an enhancement in reconstruction accuracy, both during the training and validation phases. The minimal MAPE values are attained with 5 components, although the most pronounced reduction in reconstruction error is observed at 2 PCA components.

Fig. 10 Relationship between the number of PCA components and the reconstruction error (MAPE) for temperature, and the dependence of the cumulative explained variance on the number of components

Furthermore, the MAPE for the maximum temperature within the temperature distribution is examined as a function of the number of PCA components. The selection of maximum temperature stems from its pivotal role in thermomechanical simulation. The dependence on component count is depicted in Fig. 11.

Fig. 11 Relationship between the number of PCA components and the reconstruction error (MAPE) for maximum temperature

Based on the overall MAPE (Fig. 10), the optimal component count would be two, as the relative error is already below 1%. MAPE values, while providing insight into reconstruction accuracy, may not fully capture the trade-off between dimensionality reduction and accuracy. Therefore, the cumulative explained variance serves as a valuable complementary metric, offering a broader perspective on the overall efficacy of the PCA encoding technique. Considering the cumulative explained variance, the threshold of 99.9% is attained with 3 components. This implies that 3 components capture a substantial proportion of the variance in the temperature distribution, aligning with the objective of dimensionality reduction, while still preserving small variations in the data.

The fact that the first principal component already explains over 97% of the variance in the temperature distribution underscores the presence of a dominant underlying structure or pattern within the data. This dominance suggests that a substantial portion of the variability in temperature across different components of the wet clutch system can be captured by a single component. While this may initially imply that additional components offer diminishing returns in terms of capturing variance, it is essential to consider the broader context. Despite the dominance of the first component, utilizing additional components enables the model to capture finer variations and nuances that may not be fully captured by the dominant component alone. Therefore, while the first component explains a significant portion of the variance, the inclusion of additional components allows for a more nuanced and comprehensive representation of the temperature distribution, ultimately enhancing the model's predictive capabilities.

As a result, three PCA components are determined to be the optimal number for this analysis. To elucidate the outcomes, the original temperature distributions of the components are juxtaposed with the PCA-reconstructed temperature distributions. Three illustrative examples are chosen for this purpose, as showcased in Fig. 12. Minimal discrepancies are discernible in all three instances. This implies that the reconstructed temperature distributions, utilizing merely 3 components, provide an accurate approximation of the original distributions. The results thus affirm the efficacy of PCA encoding in diminishing the dimensionality of temperature data without compromising significant information.

Fig. 12 Original and reconstructed temperature distributions using three PCA components

Pressure

The examination of the results concerning the encoding of pressure is conducted through analysis of the Mean Absolute Percentage Error (MAPE) of reconstruction and the cumulative explained variance. Figure 13 illustrates the MAPE and the cumulative explained variance as a function of the number of Principal Component Analysis (PCA) components.

Fig. 13 Dependence of the reconstruction error (MAPE) for the contact pressure and the cumulative explained variance on the number of PCA components

Remarkably, the cumulative explained variance associated with each PCA component reveals that even with just one component, over 98.7% of the variance in the pressure data is captured. This dominance of the first component underscores the presence of a strong underlying pattern within the pressure distribution. However, the inclusion of a second component further enhances the model's ability to capture finer variations, reducing the reconstruction error to under 1%. It is also noteworthy that the relative error remains consistently below 1% with two components and below 0.5% with three components. The reduction from three to four and from four to five components yields only marginal improvement.

Moreover, the stability of the MAPE when transitioning from two to three components suggests that the additional explanatory power gained by including a third component may be limited. This observation aligns with the cumulative explained variance, which indicates that over 99.9% of the variance is explained by just three components. Therefore, employing only two PCA components appears to be sufficient to achieve accurate reconstruction while effectively reducing dimensionality.

Heat flux

The outcomes of the heat flux density analysis are depicted in Fig. 14. The reconstruction error (MAE) and the cumulative explained variance are presented as a function of the number of components. The MAE is used here instead of the MAPE because the true values include zeros, which renders the MAPE metric ill-defined.

Fig. 14 Dependence of the reconstruction error (MAE) for the heat flux density on the number of PCA components

Examining the cumulative explained variance associated with each PCA component reveals that, with only one component, over 99.79% of the variance in the heat flux density data is captured. This underscores the dominant influence of the first component in representing the underlying patterns in the data. However, the addition of a second and third component further contributes to capturing finer variations in the heat flux density distribution, leading to notable improvements in reconstruction accuracy. With only one or two components, the error remains relatively high, indicative of an incomplete reconstruction of the heat flux density. A substantial improvement is observed upon employing three components, where the error is halved compared to the previous configurations. The error is halved once more with the inclusion of a fourth component; beyond this point, however, the benefit of additional components diminishes.

The preceding sections have demonstrated that encoding the temperature distribution using Principal Component Analysis (PCA) is an effective method to transform complex and extensive data into more compact representations. Reconstruction accuracy improves with an increasing number of PCA components, with the lowest reconstruction error achieved with five components.

For both temperature distribution and heat flux density encoding, it is observed that the use of three PCA components suffices to achieve good reconstruction and capture at least 99.9% of the variance in the data. Further increasing the component count yields only minor improvements in reconstruction accuracy. A relative error below 1% for the temperature is already achieved with three components, which can be considered acceptable given the magnitude of compression. Regarding the encoding of the pressure, the use of two components leads to satisfactory results, with MAPE values of under 1%. Further increasing the number of components results in diminishing returns.

Metamodeling

Mechanical metamodel

The results of the mechanical metamodel are compared based on the Mean Absolute Error (MAE) of the encoded pressure on the friction surfaces, as shown in Table 3.

Table 3 MAE for training and test set for the individual mechanical metamodels

Linear Regression achieves an MAE of 0.3399 for training and 0.3138 for validation. However, these values indicate that the model's performance may not meet the desired level of accuracy. The small difference between training and validation values does not necessarily suggest generalization ability, as the model's overall predictive capability appears to be limited. Further investigation and refinement may be necessary to improve the performance of the Linear Regression model. The Random Forest model achieves a very low MAE of 0.0194 for training but shows a slightly higher inaccuracy of 0.0515 for validation, indicating possible overfitting.

The Support Vector Regression (SVR) model achieves an MAE of 0.2821 for training and 0.2183 for validation, demonstrating good performance and a smaller deviation between training and validation values. The Gaussian Process model achieves an MAE of 0.1794 for training and 0.2004 for validation, indicating a good fit to the data with low deviation between training and validation values. The Neural Network achieves an MAE of 0.4031 for training and 0.3784 for validation. Despite these comparatively high errors, the small difference between training and validation values suggests some robustness.

Overall, the results indicate that the Random Forest, despite its tendency to overfit, and the Gaussian Process perform best, while the Neural Network yields the poorest results. The strong performance of the Random Forest can be attributed to its ability to capture complex nonlinear relationships while handling the available data effectively. By combining multiple decision trees, the Random Forest reduces variance, leading to accurate predictions. The Gaussian Process model and Support Vector Regression (SVR) also deliver good results. However, both models have the drawback of long training times. In comparison, Linear Regression yields high error values. This is because Linear Regression assumes a simple linear relationship between the input parameters and the contact pressure, potentially failing to capture the complex nonlinear relationships and dependencies in the data. The Neural Network shows the poorest performance among the tested models, possibly because it is either too complex for the available data or not adequately trained. Further optimization of the network architecture and hyperparameters could lead to improved results.

Thermal metamodel

The thermal metamodel is trained and validated using the same algorithms as the mechanical metamodel. The results of these models are presented in Table 4. Examining the results, it becomes evident that the Neural Network exhibits the lowest Mean Absolute Error (MAE) in both training and validation, making it the best-performing model among those considered. The Random Forest algorithm also demonstrates good predictive performance with a low MAE in training, although it is slightly higher in validation. The other models, such as Linear Regression, Support Vector Regression (SVR), and Gaussian Process, also yield acceptable results, albeit with somewhat higher MAE values. Overall, the thermal metamodel can predict temperature distribution with a reasonable level of accuracy, making it a valuable method for the analysis and optimization of thermomechanical systems.

Table 4 MAE for training and test set for the individual thermal metamodels

The results of the different algorithms exhibit varying performances in predicting the thermomechanical behavior of wet clutch systems. The Neural Network achieves the best results with the lowest MAE values in both training and validation, suggesting that it is effective in modeling complex nonlinear relationships between input parameters and clutch temperature. The diverse performances of the algorithms can be attributed to their ability to model complex relationships and handle nonlinearities. Models such as the Neural Network, Gaussian Process, and SVR are known for their capacity to model nonlinear relationships and make precise predictions. In contrast, Linear Regression and the Random Forest may be less accurate due to their limited ability to model nonlinear relationships.

Thermomechanical simulation using metamodels

Figure 15 displays the overall Mean Absolute Error (MAE) for each time step of the simulation. The overall MAE indicates how much the predicted temperature deviates from the temperature simulated using the Finite Element Method (FEM) at each point. Additionally, the deviation of the maximum temperature in the clutch system is examined in detail.

Fig. 15 Mean Absolute Error (MAE) across all simulations, depicting the disparity between predicted and simulated temperatures at each time step of the simulation

It is evident that the overall MAE remains below 10 K for all time steps. However, the error's pattern over the time steps is not constant; slight fluctuations are observable. Peaks in the MAE occur at time steps 5, 14, and 23. For all other time steps, the average overall error is less than 5 K. When examining the deviation of the maximum temperature, significantly stronger deviations are apparent. This curve, like the overall error, contains fluctuations, but with much more pronounced peaks, all exceeding 15 K and reaching up to 27 K at certain time steps.

Observing the MAE per load step and comparing its pattern with the applied load profile, it becomes apparent that the MAE peaks occur at specific time steps and coincide with load spikes in the load profile. This suggests that the predictive accuracy of the models depends on the specific load situations. When the clutch is subjected to particular loads, for example during the time steps with error peaks, the models seem to have difficulty accurately predicting the actual temperatures.

This observation can be attributed to various factors. Firstly, the models may not precisely capture the nonlinear and complex relationships between the loads and the temperature distributions during such load spikes. In these situations, phenomena may occur that are not captured by the models, leading to larger discrepancies. Furthermore, the input data used can also play a role. Results may improve if additional data points from high-load situations are added to the training set.

Figure 16 compares the simulated and predicted temperature distributions for four load steps with two different load profiles. The figure presents both the results of the Finite Element Method (FEM) simulation and the results of the metamodeling. It becomes evident that for all four steps, there is a notable similarity between the FEM results and the metamodeling results, with hardly any visual differences.

Fig. 16 Comparison between the FEM solution and the prediction using metamodels (axial force 13,400 kN / rotational speed 126 rpm)

Discussion/conclusion

In this work, a method for encoding and metamodeling a thermomechanical system to analyze the thermomechanical behavior of wet clutches is developed. By combining mechanical and thermal metamodels, efficient and rapid prediction of pressure and temperature distribution on the friction surfaces is enabled. Encoding temperature distribution, pressure, and heat flux density using PCA results in effective dimensionality reduction and precise data reconstruction. Various metamodeling techniques such as Linear Regression, Decision Trees, Random Forest, SVR, Gaussian Processes, and Neural Networks are examined and compared.

The results show that PCA encoding is an effective method for reducing data dimensionality. Particularly in encoding temperature distribution, good reconstruction can be achieved with just a few PCA components. It is found that using two PCA components is sufficient for encoding temperature distribution or pressure effectively, while three PCA components are determined to be optimal for heat flux density.

Metamodeling with different algorithms demonstrates varying performance, with the best results achieved using Random Forest for the mechanical and Neural Networks for the thermal metamodel. Validation of the metamodels yields high accuracy, showcasing their ability to model both mechanical and thermal relationships.

Overall, the developed metamodels represent a promising tool to support the design and analysis of wet clutches. A promising avenue for future research in this field includes investigating the influence of data volume on metamodel performance. It would be interesting to see how the prediction accuracy of the models improves with a larger amount of training data and whether there is a point at which adding more data no longer has a significant impact.

Another aspect that could be explored is the integration of additional input parameters into the metamodels. Factors such as plate thickness or thermal conductivity could play a role and impact the thermomechanical behavior of clutches. Considering these parameters could further enhance prediction accuracy. Additionally, testing alternative methods such as Physics Informed Neural Networks may be interesting. These approaches allow for the integration of physical laws and principles into Neural Network structures. By combining machine learning with physical knowledge, modeling accuracy could be further enhanced.

Data availability

The datasets generated or analyzed during the current study are available as supplementary material.

References

  1. Anderson AE, Knapp RA. Hot spotting in automotive friction systems. Wear. 1990;135(2):319–37. https://doi.org/10.1016/0043-1648(90)90034-8.

  2. Groetsch D, et al. Experimental investigations of spontaneous damage to wet multi-plate clutches with carbon friction linings. Forsch Ingenieurwes. 2021;85(4):1043–52. https://doi.org/10.1007/s10010-021-00492-9.

  3. Schneider T, Beiderwellen Bedrikow A, Völkel K, Pflaum H. Load capacity comparison of different wet multiplate clutches with sinter friction lining with regard to spontaneous damage behavior. Tribol Ind. 2022. https://doi.org/10.24874/ti.1256.02.22.04.

  4. Graf M, Ostermeyer G-P. Hot bands and hot spots: some direct solutions of continuous thermoelastic systems with friction. Phys Mesomech. 2012;15(5–6):306–15. https://doi.org/10.1134/S1029959912030113.

  5. Schneider T, Völkel K, Pflaum H, Stahl K. Einfluss von Vorschädigung auf das Reibungsverhalten nasslaufender Lamellenkupplungen im Dauerschaltbetrieb. Forsch Ingenieurwes. 2021;85(4):859–70. https://doi.org/10.1007/s10010-021-00540-4.

  6. Barber JR. Thermoelastic instabilities in the sliding of conforming solids. Proc R Soc Lond A. 1969;312(1510):381–94. https://doi.org/10.1098/rspa.1969.0165.

  7. Krempaszky C, Werner E, Lippmann H. Reibungsinduzierte thermoelastische instabilitäten von Kreisringplatten. Proc Appl Math Mech. 2004;4(1):197–8. https://doi.org/10.1002/pamm.200410080.

  8. Yi Y-B, Du S, Barber JR, Fash JW. Effect of geometry on thermoelastic instability in disk brakes and clutches. J Tribol. 1999;121(4):661–6. https://doi.org/10.1115/1.2834120.

  9. Zhao J, Yi Y-B, Li H. Effects of frictional material properties on thermoelastic instability deformation modes. Proc IME J J Eng Tribol. 2015;229(10):1239–46. https://doi.org/10.1177/1350650115576783.

  10. Kennedy FE, Ling FF. A thermal, thermoelastic, and wear simulation of a high-energy sliding contact problem. J Lubr Technol. 1974;96(3):497–505. https://doi.org/10.1115/1.3452024.

  11. Zagrodzki P. Numerical analysis of temperature fields and thermal stresses in the friction discs of a multidisc wet clutch. Wear. 1985;101(3):255–71. https://doi.org/10.1016/0043-1648(85)90080-8.

  12. Zagrodzki P. Analysis of thermomechanical phenomena in multidisc clutches and brakes. Wear. 1990;140(2):291–308. https://doi.org/10.1016/0043-1648(90)90091-N.

  13. Tirovic M, Day AJ. Disc brake interface pressure distributions. Proc Inst Mech Eng Pt D J Automobile Eng. 1991;205(2):137–46. https://doi.org/10.1243/PIME_PROC_1991_205_162_02.

  14. Zhao S, Hilmas GE, Dharani LR. Behavior of a composite multidisk clutch subjected to mechanical and frictionally excited thermal load. Wear. 2008;264(11–12):1059–68. https://doi.org/10.1016/j.wear.2007.08.012.

  15. Hwang P, Wu X. Investigation of temperature and thermal stress in ventilated disc brake based on 3D thermo-mechanical coupling model. J Mech Sci Technol. 2010;24(1):81–4. https://doi.org/10.1007/s12206-009-1116-7.

  16. Abdullah OI, Schlattmann J, Majeed MH, Sabri LA. The distribution of frictional heat generated between the contacting surfaces of the friction clutch system. Int J Interact Des Manuf. 2019;13(2):487–98. https://doi.org/10.1007/s12008-018-0480-x.

  17. Belhocine A, Abdullah OI. Design and thermomechanical finite element analysis of frictional contact mechanism on automotive disc brake assembly. J Fail Anal Preven. 2020;20(1):270–301. https://doi.org/10.1007/s11668-020-00831-y.

  18. Rouhi Moghanlou M, Saeidi Googarchin H. Three-dimensional coupled thermo-mechanical analysis for fatigue failure of a heavy vehicle brake disk: Simulation of braking and cooling phases. Proc Inst Mech Eng Pt D J Automobile Eng. 2020;234(13):3145–63. https://doi.org/10.1177/0954407020921711.

  19. Wang Z, Zhang J. Thermomechanical coupling simulation and analysis of wet multi-disc brakes during emergency braking. J Phys Conf Ser. 2021;1875(1):012005. https://doi.org/10.1088/1742-6596/1875/1/012005.

  20. Schneider T, Dietsch M, Voelkel K, Pflaum H, Stahl K. Analysis of the thermo-mechanical behavior of a multi-plate clutch during transient operating conditions using the FE method. Lubricants. 2022;10(5):76. https://doi.org/10.3390/lubricants10050076.

  21. Hoffer JG, Geiger BC, Ofner P, Kern R. Mesh-free surrogate models for structural mechanic FEM simulation: a comparative study of approaches. Appl Sci. 2021;11(20):9411. https://doi.org/10.3390/app11209411.

  22. Vurtur Badarinath P, Chierichetti M, Davoudi Kakhki F. A machine learning approach as a surrogate for a finite element analysis: status of research and application to one dimensional systems. Sensors. 2021. https://doi.org/10.3390/s21051654.

  23. Nie Z, Jiang H, Kara LB. Stress field prediction in cantilevered structures using convolutional neural networks. J Comput Inf Sci Eng. 2020;20(1):3627. https://doi.org/10.1115/1.4044097.

  24. Haghighat E, Raissi M, Moure A, Gomez H, Juanes R. A physics-informed deep learning framework for inversion and surrogate modeling in solid mechanics. Comput Methods Appl Mech Eng. 2021;379: 113741. https://doi.org/10.1016/j.cma.2021.113741.

  25. Cuomo S, Di Cola VS, Giampaolo F, Rozza G, Raissi M, Piccialli F. Scientific machine learning through physics-informed neural networks: where we are and what's next. 2022. [Online]. http://arxiv.org/pdf/2201.05624v4

  26. Jeong Y, Lee S-I, Lee J, Choi W. Surrogate modeling of structural mechanics problems via piecewise physics-informed neural networks. 2022.

  27. D’Addona DM, Antonelli D. Neural network multiobjective optimization of hot forging. Procedia CIRP. 2018;67:498–503. https://doi.org/10.1016/j.procir.2017.12.251.

  28. Chan WL, Fu MW, Lu J. An integrated FEM and ANN methodology for metal-formed product design. Eng Appl Artif Intell. 2008;21(8):1170–81. https://doi.org/10.1016/j.engappai.2008.04.001.

  29. Lorente D, et al. A framework for modelling the biomechanical behaviour of the human liver during breathing in real time using machine learning. Expert Syst Appl. 2017;71:342–57. https://doi.org/10.1016/j.eswa.2016.11.037.

  30. Martínez-Martínez F, et al. A finite element-based machine learning approach for modeling the mechanical behavior of the breast tissues under compression in real-time. Comput Biol Med. 2017;90:116–24. https://doi.org/10.1016/j.compbiomed.2017.09.019.

  31. Mozaffar M, et al. Data-driven prediction of the high-dimensional thermal history in directed energy deposition processes via recurrent neural networks. Manuf Lett. 2018;18:35–9. https://doi.org/10.1016/j.mfglet.2018.10.002.

  32. Zobeiry N, Humfeld KD. A physics-informed machine learning approach for solving heat transfer equation in advanced manufacturing and engineering applications. Eng Appl Artif Intell. 2021;101: 104232. https://doi.org/10.1016/j.engappai.2021.104232.

  33. Anandan Kumar H, Kumaraguru S, Paul CP, Bindra KS. Faster temperature prediction in the powder bed fusion process through the development of a surrogate model. Opt Laser Technol. 2021;141:107122. https://doi.org/10.1016/j.optlastec.2021.107122.

  34. Abio A, et al. Machine learning-based surrogate model for press hardening process of 22MnB5 sheet steel simulation in Industry 4.0. Materials. 2022. https://doi.org/10.3390/ma15103647.

  35. Schneider T, Beiderwellen Bedrikow A, Dietsch M, Voelkel K, Pflaum H, Stahl K. Machine learning based surrogate models for the thermal behavior of multi-plate clutches. ASI. 2022;5(5):97. https://doi.org/10.3390/asi5050097.

  36. Hou CKJ, Behdinan K. Dimensionality reduction in surrogate modeling: a review of combined methods. Data Sci Eng. 2022;7(4):402–27. https://doi.org/10.1007/s41019-022-00193-5.

  37. Maćkiewicz A, Ratajczak W. Principal components analysis (PCA). Comput Geosci. 1993;19(3):303–42. https://doi.org/10.1016/0098-3004(93)90090-R.

  38. Jolliffe IT, Cadima J. Principal component analysis: a review and recent developments. Philos Trans A Math Phys Eng Sci. 2016;374(2065):20150202. https://doi.org/10.1098/rsta.2015.0202.

  39. Viana FAC. A tutorial on Latin hypercube design of experiments. Qual Reliab Engng Int. 2016;32(5):1975–85. https://doi.org/10.1002/qre.1924.

  40. Huang X, Wu L, Ye Y. A review on dimensionality reduction techniques. Int J Patt Recogn Artif Intell. 2019;33(10):1950017. https://doi.org/10.1142/S0218001419500174.

  41. Salih Hasan BM, Abdulazeez AM. A review of principal component analysis algorithm for dimensionality reduction. JSCDM. 2021. https://doi.org/10.30880/jscdm.2021.02.01.003.

  42. Murphy K. Machine learning: a probabilistic perspective. Cambridge: MIT Press; 2014.

  43. Pedregosa F, et al. Scikit-learn: machine learning in Python. 2012. https://doi.org/10.48550/arXiv.1201.0490.

  44. Abadi M, et al. TensorFlow: a system for large-scale machine learning. 2015. [Online]. www.tensorflow.org

Acknowledgements

The presented results are based on the research project FVA no. 515/V, self-financed by the Research Association for Drive Technology e. V. (FVA). The authors would like to express their thanks for the sponsorship and support received from the FVA and the members of the project committee.

Funding

Open Access funding enabled and organized by Project DEAL.

Author information

Contributions

Conceptualization, T.S., A.B.; methodology, T.S., A.B.; data acquisition, A.B.; interpretation, T.S., A.B.; visualization, A.B.; writing—original draft preparation, A.B., T.S.; writing—review and editing, T.S. and K.S.; supervision, K.S.; resources, K.S.; project administration, T.S. All authors have read and agreed to the published version of the manuscript.

Corresponding author

Correspondence to Thomas Schneider.

Ethics declarations

Competing interests

The authors declare no competing interests.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

Cite this article

Schneider, T., Bedrikow, A.B. & Stahl, K. Enhanced prediction of thermomechanical systems using machine learning, PCA, and finite element simulation. Adv. Model. and Simul. in Eng. Sci. 11, 14 (2024). https://doi.org/10.1186/s40323-024-00268-0
