Table 1 Best cases of activation functions based on loss function

From: Physics-Informed Neural Network water surface predictability for 1D steady-state open channel cases with different flow types and complex bed profile shapes

| Case | Activation function | RMSE (%) | MAE (%) | EF | Hidden layers | Neurons per hidden layer | Loss function |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Case 1 | Tanh | 0.53 | 0.42 | 0.999 | 7 | 60 | 4.57E-04 |
| Case 1 | ReLU | 16.04 | 14.5 | -0.138 | 7 | 60 | 1.15E-01 |
| Case 1 | Sin | 0.99 | 0.77 | 0.996 | 5 | 40 | 3.77E-03 |
| Case 1 | Sigmoid | 0.58 | 0.46 | 0.999 | 7 | 60 | 1.17E-03 |
| Case 2 | Tanh | 2.23 | 2.07 | 0.988 | 5 | 20 | 2.25E-03 |
| Case 2 | ReLU | 6.25 | 4.32 | 0.903 | 7 | 40 | 5.72E+00 |
| Case 2 | Sin | 2.2 | 2.04 | 0.988 | 5 | 40 | 8.07E-03 |
| Case 2 | Sigmoid | 2.24 | 2.08 | 0.988 | 7 | 60 | 1.86E-03 |
| Case 3 | Tanh | 26.69 | 12.08 | 0.554 | 5 | 60 | 7.42E-04 |
| Case 3 | ReLU | 42.95 | 39.65 | -0.154 | 3 | 60 | 2.85E+00 |
| Case 3 | Sin | 26.11 | 11.29 | 0.573 | 3 | 20 | 9.85E-02 |
| Case 3 | Sigmoid | 25.4 | 12.91 | 0.596 | 3 | 40 | 5.78E-03 |
| Case 3 | Direct Step Method | 18.05 | 4.85 | 0.796 | N/A | N/A | N/A |
| Case 4 | Tanh | 1.76 | 0.93 | 0.997 | 3 | 20 | 2.61E-06 |
| Case 4 | ReLU | 21.13 | 14.23 | 0.589 | 3 | 40 | 1.68E-01 |
| Case 4 | Sin | 4.04 | 1.77 | 0.985 | 7 | 40 | 2.39E-06 |
| Case 4 | Sigmoid | 4.56 | 2.67 | 0.981 | 7 | 60 | 1.56E-05 |
| Case 5 Low Flow | Tanh | 7.86 | 6.62 | 0.932 | 3 | 40 | 8.48E-02 |
| Case 5 Low Flow | ReLU | 27.49 | 18.48 | 0.171 | 7 | 60 | 4.48E+01 |
| Case 5 Low Flow | Sin | 6.56 | 5.88 | 0.953 | 7 | 60 | 9.87E-02 |
| Case 5 Low Flow | Sigmoid | 7.44 | 6.45 | 0.939 | 7 | 40 | 1.07E-01 |
| Case 5 Mid Flow | Tanh | 6.89 | 5.06 | 0.925 | 3 | 40 | 8.48E-02 |
| Case 5 Mid Flow | ReLU | 21.22 | 14.69 | 0.291 | 7 | 60 | 4.48E+01 |
| Case 5 Mid Flow | Sin | 5.18 | 3.89 | 0.958 | 7 | 60 | 9.87E-02 |
| Case 5 Mid Flow | Sigmoid | 6.32 | 4.89 | 0.937 | 7 | 40 | 1.07E-01 |
| Case 5 High Flow | Tanh | 6.30 | 4.82 | 0.933 | 3 | 40 | 8.48E-02 |
| Case 5 High Flow | ReLU | 19.90 | 13.55 | 0.328 | 7 | 60 | 4.48E+01 |
| Case 5 High Flow | Sin | 4.90 | 3.67 | 0.959 | 7 | 60 | 9.87E-02 |
| Case 5 High Flow | Sigmoid | 5.76 | 4.56 | 0.944 | 7 | 40 | 1.07E-01 |
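For reference, the sketch below (not taken from the paper) illustrates how the three accuracy columns could be computed from a predicted and a reference water-surface profile. It assumes that RMSE (%) and MAE (%) are normalized by the mean observed water depth and that EF denotes the Nash–Sutcliffe model efficiency; the arrays `h_obs` and `h_pred` and all helper names are hypothetical stand-ins, not the authors' code.

```python
# Minimal sketch of the table's evaluation metrics, under the assumptions stated above.
import numpy as np


def rmse_percent(h_obs: np.ndarray, h_pred: np.ndarray) -> float:
    """Root-mean-square error as a percentage of the mean observed depth (assumed normalization)."""
    rmse = np.sqrt(np.mean((h_pred - h_obs) ** 2))
    return 100.0 * rmse / np.mean(h_obs)


def mae_percent(h_obs: np.ndarray, h_pred: np.ndarray) -> float:
    """Mean absolute error as a percentage of the mean observed depth (assumed normalization)."""
    return 100.0 * np.mean(np.abs(h_pred - h_obs)) / np.mean(h_obs)


def efficiency(h_obs: np.ndarray, h_pred: np.ndarray) -> float:
    """Nash-Sutcliffe model efficiency: 1 is a perfect fit; values at or below 0 mean the
    prediction is no better than the observed mean (cf. the negative EF of ReLU in Cases 1 and 3)."""
    return 1.0 - np.sum((h_obs - h_pred) ** 2) / np.sum((h_obs - np.mean(h_obs)) ** 2)


if __name__ == "__main__":
    # Hypothetical usage: a reference depth profile and a stand-in PINN prediction
    # sampled at the same stations along the channel.
    x = np.linspace(0.0, 1000.0, 201)                      # channel stations [m]
    h_obs = 2.0 + 0.3 * np.sin(2 * np.pi * x / 500.0)      # reference water depth [m]
    h_pred = h_obs + np.random.normal(0.0, 0.01, x.size)   # stand-in PINN output [m]

    print(f"RMSE % = {rmse_percent(h_obs, h_pred):.2f}")
    print(f"MAE %  = {mae_percent(h_obs, h_pred):.2f}")
    print(f"EF     = {efficiency(h_obs, h_pred):.3f}")
```

Under these assumptions, the metrics are read independently of the loss column: the loss function value reflects how well the network satisfied its training objective, while RMSE (%), MAE (%), and EF compare the resulting water-surface profile against the reference solution.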