Computational method for solving weakly singular Fredholm integral equations of the second kind using an advanced barycentric Lagrange interpolation formula

In this study, we apply an advanced barycentric Lagrange interpolation formula to find interpolant solutions of weakly singular Fredholm integral equations of the second kind. The kernel is interpolated twice with respect to both variables and is then transformed into the product of five matrices, two of which are monomial basis matrices. To isolate the singularity of the kernel, we developed two techniques based on a careful choice of two different sets of nodes distributed over the integration domain. Each set is specific to one of the kernel arguments, so that the kernel values never become zero or imaginary. The significant advantage of the two presented techniques is the ability to obtain an algebraic linear system equivalent to the interpolant solution without applying the collocation method. Moreover, the convergence in the mean of the interpolant solution and the maximum error norm estimate are studied. The interpolant solutions of the four illustrated examples are found to converge strongly and uniformly to the exact solutions.

Shoukralla et al. Adv. Model. and Simul. in Eng. Sci. (2021) 8:27

A certain class of these equations has an unknown function that is singular at the endpoints of the integration domain and has weakly singular logarithmic kernels [6][7][8][9][10][11][12][13][14]. Dmitriev et al. [6] provided an iterative method for solving the Fredholm integral equation of the first kind with a weakly singular logarithmic kernel and a nonsingular unknown function. Shoukralla [7,8] presented two methods for the solution of the Fredholm integral equation with a logarithmic singular kernel and a singular unknown function. The kernel singularity is isolated analytically using the Kantorovich technique, and the unknown functions are approximated on the basis of Taylor and Chebyshev polynomials with an analytical treatment of the singularity. These techniques provided acceptable solutions at the time, despite the difficulty and complexity of the two procedures. Shoukralla [9,10] solved the same integral equation on the basis of Chebyshev polynomials of the second kind, providing high-accuracy results. Shoukralla et al. [11][12][13] solved a certain class of singular Fredholm integral equations of the first kind with singular logarithmic kernels and singular unknown functions on the basis of monic and economized monic Chebyshev polynomials, with different approaches to removing the singularities. In this study, we focus on the numerical solution of Fredholm equations of the second kind with another type of kernel singularity, which requires a technique different from those used for the singular logarithmic kernels of first-kind Fredholm equations. Many methods for solving weakly singular Fredholm integral equations of the second kind have been published [14][15][16][17][18][19][20][21][22][23][24][25]. For example, Yin et al.
[14] used the Jacobi-Gauss quadrature formula to approximate the integral operator in the numerical implementation of the spectral collocation method and established the spectral Chebyshev collocation method for solving Fredholm integral equations of the second kind with weakly singular kernels. This method shows that the errors of the approximate solution decay exponentially in the infinity and weighted norms. Behzadi et al. [15] developed some modifications of the generalized Euler-Maclaurin summation formula, using Bernoulli functions to construct a generalized quadrature, and built a numerical method based on the trapezoidal rule for solving weakly singular integral equations.
In this study, we apply advanced single and double barycentric Lagrange interpolation formulas and adapt them to completely isolate the kernel singularity and find accurate solutions of weakly singular Fredholm equations of the second kind. Shoukralla et al. [26][27][28][29] developed a new version of the traditional barycentric Lagrange interpolation [30] and applied it successfully to solve linear nonsingular Volterra integral equations of the second kind. For weakly singular Fredholm equations of the second kind, the matter is more complicated because of difficulties related to the singularity of the kernel.
This study focuses on the application of two advanced barycentric interpolation formulas to solve the weakly singular Fredholm integral equation of the second kind, with two innovative techniques for treating the kernel's singularity that depend on a careful choice of the node distribution rules. Naturally, our primary aim is to reduce computational complexity, but whether, and how fast, the interpolant solution converges to the exact one is at least as important.
One advantage of the presented techniques is the designed rules for the distribution of nodes, which ensure that the nodes always remain within the integration domain and never fall outside it. Moreover, these rules are designed so that the difference between the nodes assigned to the two variables of the kernel always remains positive. Based on this idea, the numerical solutions become stable [31] on the whole interval, even at the endpoints, as shown in the solved examples.
We begin by interpolating the unknown and data functions using the advanced single matrix-form barycentric interpolant polynomials; each is expressed through four matrices, one of which is the monomial basis function matrix [32]. The weakly singular kernel is then interpolated twice with respect to its two arguments: the first interpolation, with respect to the first variable (of positive sign), uses nodes distributed on the right half of the integration domain, and the second interpolation, with respect to the variable of negative sign, uses nodes distributed on the left half. This scheme ensures that the difference between the kernel's two variables always remains positive, completely removing the singularity of the kernel and expressing it through five matrices, two of which are monomial basis matrices.
Additional advantages of these techniques are not only the simplified calculations but also the access to an equivalent linear system of algebraic equations without applying the collocation method. This idea is implemented by substituting the interpolant unknown function on both sides of the integral equation. By solving the obtained algebraic linear system directly, the unknown coefficient matrix can be found, and consequently the interpolant unknown function. Six examples are solved using the two presented techniques: examples 1-4 involve weakly singular equations, while examples 5-6 involve linear nonsingular equations. The first three examples were also solved by a trapezoidal approach in [14], whereas the fourth, a boundary integral equation arising in problems of radiation, potential theory, scattering theory, electromagnetism, and hydrodynamics, was solved in [15]. The solutions of examples 1-4 obtained by the two presented techniques converge strongly to the exact solutions compared with [14,15]. The solutions of examples 5-6 obtained by the two presented techniques are equal to the exact solutions. The given tables and graphs demonstrate the originality, applicability, and accuracy of the presented method.

Advanced barycentric Lagrange interpolation formula
The question of expressing a given function by an interpolant is vital in approximation theory as well as in computational methods. A remarkable advantage of Lagrange interpolation is its independence of the arrangement of the selected nodes, although efficient results require few nodes. By contrast, increasing the number of nodes complicates the scheme and destabilizes the numerical solution. The barycentric Lagrange interpolation is an excellent way to improve the performance of traditional Lagrange interpolation. In this section, we provide a mathematical formula, different in form and content, that goes beyond the well-known traditional barycentric Lagrange formula. The new formula is expressed through matrices, one of which is the monomial basis matrix, which cancels in the solution procedure. Thus, the steps of the solution are reduced, the round-off error is minimized, and high-precision solutions are obtained.
Let the function f(x) be defined on [a, b] as the tabulated function f(x_i) = f_i, i = 0, 1, ..., n, for the set of (n + 1) equally spaced distinct nodes {x_i}_{i=0}^n such that x_i = a + ih, where the step size h is defined by h = (b - a)/n. Berrut et al. [30] provided the barycentric Lagrange interpolating polynomial of degree n, f_n(x), which interpolates the tabulated function so that f_n(x_i) = f(x_i) = f_i, in the form

$$f_n(x) = \frac{\sum_{i=0}^{n} \dfrac{w_i}{x - x_i} f_i}{\sum_{i=0}^{n} \dfrac{w_i}{x - x_i}}, \qquad w_i = (-1)^i \binom{n}{i}. \qquad (1)$$

Although Formula (1) is simpler than the traditional Lagrange interpolating polynomial [30], it is still difficult to apply for interpolating the unknown functions, as well as the kernels, of integral equations of any type and kind, because it hinders the facilitation of the solution steps and causes some computational impediments. Therefore, we adapt this formula before using it so that it becomes easier to apply to integral equations. Using some operational matrix algebra, we can increase the computational efficiency of Formula (1) and obtain an improved matrix formula by expanding the numerator and distributing it over the denominator, separating the barycentric weights w_i. Thus, we obtain f_n(x) in the modified matrix form

$$f_n(x) = \Phi(x) W F. \qquad (2)$$

Here, Φ(x) is the 1 × (n + 1) row matrix of barycentric functions, W = diag{w_0, w_1, ..., w_n} is the (n + 1) × (n + 1) square diagonal matrix whose entries w_i are defined by (1), and F = [f_0, f_1, ..., f_n]^T is the (n + 1) × 1 column matrix whose entries f_i are the functional values of f(x). By studying the behavior of matrix Formula (2), we can carry out the analysis so that the so-called monomial matrix is separated. This idea can be implemented by extracting the coefficients of the barycentric functions of the matrix Φ(x). Moreover, the numerator and the denominator have common factors, and these factors annihilate each other.
This idea gives us the incentive to rearrange the terms of each barycentric function in ascending powers of x, or simply to expand each function into a Maclaurin polynomial written as the product of two matrices, one of which is the monomial basis matrix and the other the Maclaurin coefficient matrix. On the basis of this idea, Formula (2) is expressed via four matrices as follows:

$$f_n(x) = X(x) C W F, \qquad (4)$$

where X(x) = [1, x, x^2, ..., x^n] is the 1 × (n + 1) monomial basis row matrix and C is the (n + 1) × (n + 1) square Maclaurin coefficient matrix whose entries c_ij are calculated by (5). Thus, we have derived a simple matrix formula that raises the computational efficiency of the traditional barycentric Lagrange interpolation (1) and can easily be applied to find the interpolant polynomial of any function f(x) defined on the interval [a, b]. We call the right-hand side of (4) the "advanced barycentric Lagrange single interpolation formula." Applying Formula (4) to interpolate the data and unknown functions of integral equations remarkably reduces the solution steps through some operational matrix abbreviations, considerably reducing round-off errors and saving time. We now apply the new Formula (4) to solve weakly singular Fredholm integral equations of the second kind.
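For concreteness, the two equivalent evaluations above can be sketched in Python. The barycentric form (1) uses the equispaced weights w_i = (-1)^i C(n, i); for the matrix form (4), the product CW is not built from explicit Maclaurin expansions here but recovered numerically as the inverse Vandermonde matrix, which plays the same role. The node count n = 6 and the test function sin x are illustrative choices, not from the paper.

```python
import numpy as np
from math import comb

def bary_weights(n):
    # Barycentric weights for (n+1) equally spaced nodes: w_i = (-1)^i * C(n, i)
    return np.array([(-1.0) ** i * comb(n, i) for i in range(n + 1)])

def bary_interp(x, nodes, f_vals, w):
    # Second (true) barycentric form of Formula (1)
    diff = x - nodes
    hit = np.isclose(diff, 0.0)
    if hit.any():                       # x coincides with a node: return the data value
        return f_vals[np.argmax(hit)]
    q = w / diff
    return q @ f_vals / q.sum()

a, b, n = 0.0, 1.0, 6
nodes = np.linspace(a, b, n + 1)
w = bary_weights(n)
f_vals = np.sin(nodes)

# Matrix form f_n(x) = X(x) (CW) F, with CW recovered as the inverse Vandermonde
CW = np.linalg.inv(np.vander(nodes, increasing=True))
x = 0.37
X = x ** np.arange(n + 1)               # monomial basis row X(x)
print(bary_interp(x, nodes, f_vals, w), X @ CW @ f_vals)
```

Both evaluations produce the same degree-n interpolant; the barycentric form is the numerically stabler of the two for larger n.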

Advanced barycentric interpolation formulas for solving weakly singular Fredholm integral equations of the second kind
Here, we present two new techniques for solving weakly singular Fredholm integral equations of the second kind. The method starts by interpolating the unknown and data functions using Formula (4). As for the kernel, we use Formula (4) twice to obtain a double interpolant polynomial through five matrices. In this manner, we provide two techniques for choosing the distribution of nodes for the two main variables x and t of the kernel. In the first technique, the x-nodes are distributed on the right half of the integration domain, whereas the t-nodes are distributed on the left half. The step sizes for the two sets of nodes depend on some real numbers δ_1, δ_2 ≥ 0, which in turn depend on the interpolation degree. In the second technique, we present two different sets of node distributions corresponding to the two variables, all distributed on the entire integration domain. Consider the weakly singular Fredholm integral equation of the second kind
$$u(x) = \phi(x) + \int_a^b k(x, t)\, u(t)\, dt, \qquad a \le x \le b, \qquad (6)$$

where ϕ(x) is a given function and u(x) is the unknown function defined on L²[a, b]. Here, the given kernel takes the weakly singular form k(x, t) = 1/|x - t|^α, 0 < α < 1, and |ϕ(x)| ≤ N, |k(x, t)| ≤ M, |u(x)| ≤ L, where N, M, L are assumed to be real numbers.

The first technique
Let φ_n(x) be the single interpolant polynomial that interpolates ϕ(x) of (6) on the basis of the chosen set of nodes. Using the new Formula (4), ϕ(x) can be replaced by its interpolant polynomial φ_n(x) of degree n in the matrix form

$$\varphi_n(x) = X(x)\, P\, \Phi,$$

where P = CW is the (n + 1) × (n + 1) square matrix, Φ = [ϕ(x_i)]_{i=0}^n is the (n + 1) × 1 column matrix of the values of ϕ(x) at the nodes, and the c_ij are calculated by (5). Similarly, the unknown function u(x), like ϕ(x), can be interpolated to obtain its unknown single interpolant polynomial ũ_n(x) in the matrix form

$$\tilde{u}_n(x) = X(x)\, P\, U, \qquad (9)$$

where U = [u_i]_{i=0}^n are the undetermined coefficients of the unknown single interpolant polynomial.
Consequently, for the weakly singular kernel k(x, t) = 1/|x - t|^α, which is singular as x → t, we interpolate twice: first with respect to x and then with respect to t, so that we obtain the double interpolant polynomial k_{n,n}(x, t) of the two variables x and t. The mathematical properties of the kernel force us to design an innovative technique with the potential to remove this singularity. This goal can be achieved only under the important and necessary condition that x > t. Thus, we adopt an approach based on an appropriate choice of two different sets of nodes: the first set {x_i}_{i=0}^n is distributed on the right-half interval [(b - a)/2, b] of the integration domain, and the second set {t_i}_{i=0}^n is distributed on the left-half interval [a, (b - a)/2]. This yields two barycentric function summations ρ(x) and ρ̄(t): the first, ρ(x), corresponds to the set of nodes {x_i}_{i=0}^n and its barycentric functions, whereas the second, ρ̄(t), corresponds to the set of nodes {t_i}_{i=0}^n and its barycentric functions. Using the same strategy used to derive Formula (4), the kernel k(x, t) can be interpolated using the set of nodes {x_i}_{i=0}^n via four matrices, where K(x_i, t) is the column matrix of the kernel values at the x-nodes. In the same context, we again interpolate each function k(x_i, t), i = 0, 1, ..., n, using the set of nodes {t_j}_{j=0}^n. After lengthy substitutions and abbreviations, performed using some matrix operations, we express the kernel through five matrices, two of which are monomial basis function matrices: the row monomial basis matrix X(x) subjected to x and the column monomial basis matrix X^T(t) subjected to t.
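The half-domain node splitting can be illustrated with a small sketch. The shift delta below is a hypothetical stand-in for the role of the paper's δ_1, δ_2; the only property the rule must guarantee is that every x-node lies strictly above every t-node, so all kernel values are real and finite.

```python
import numpy as np

a, b, n, alpha = 0.0, 1.0, 4, 0.5
mid = (b - a) / 2
delta = 1e-3                          # hypothetical small shift keeping the halves disjoint

# x-nodes on the right half, t-nodes on the left half of [a, b]
x_nodes = np.linspace(mid + delta, b, n + 1)
t_nodes = np.linspace(a, mid, n + 1)

# Every difference x_i - t_j is positive, so the kernel matrix is real and finite
diff = x_nodes[:, None] - t_nodes[None, :]
K = diff ** (-alpha)                  # k(x, t) = 1/|x - t|^alpha evaluated with x > t
```

Without the shift, the two halves would share the midpoint and one kernel entry would blow up; with it, the smallest difference is exactly delta.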
Thus, we obtain the advanced barycentric double interpolant polynomial k_{n,n}(x, t) via five matrices, where the (n + 1) × (n + 1) square matrix K is calculated from the kernel values at the nodes, and A^T = [a_ij]_{i,j=0}^n and B^T = [b_ij]_{i,j=0}^n are (n + 1) × (n + 1) square matrices of Maclaurin coefficients. Moreover, substituting k_{n,n}(x, t) given by (13) and ũ_n(t) given by (9) into the right side of (6), we obtain ũ_n(x) in matrix form through the (n + 1) × (n + 1) square matrices N and X(t). Integrating the right side of (16) yields the (n + 1) × (n + 1) square matrix H. Furthermore, replacing u(x) on the left side of (6) with ũ_n(x) defined by (9), and replacing u(t)k(x, t) on the right side with k_{n,n}(x, t)ũ_n(t), we obtain Eq. (20); simplifying (20) yields the linear algebraic system (21). By applying any direct method, we can solve system (21) to obtain the unknown coefficient column matrix U. Accordingly, the interpolant solution given by (9) takes a simple matrix form involving the (n + 1) × 1 column matrix Γ, whose entries {γ_i}_{i=0}^n can be easily calculated from the product of the three known coefficient matrices M^{-1}PΦ. Hence, the interpolant polynomial solution of the considered integral Eq. (6) is obtained.
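The integration step that produces H rests on a simple fact: once the kernel and the unknown are written in the monomial basis, the integral of X^T(t) X(t) over [a, b] reduces to a moment matrix with entries ∫_a^b t^{i+j} dt. The sketch below builds that moment matrix; this is our reading of the role of the paper's integrated matrix, and the surrounding factors A, B, K are not reproduced here.

```python
import numpy as np

def moment_matrix(a, b, n):
    # H[i, j] = integral over [a, b] of t^(i+j) dt = (b^(i+j+1) - a^(i+j+1)) / (i+j+1)
    i, j = np.indices((n + 1, n + 1))
    p = i + j + 1
    return (b ** p - a ** p) / p

H = moment_matrix(0.0, 1.0, 3)
```

On [0, 1] this is the (n+1) × (n+1) Hilbert matrix, H[i, j] = 1/(i + j + 1); its closed form is why the monomial basis matrices cancel out of the final linear system.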

The second technique
We choose two sets of nodes {x_i}_{i=0}^n and {t_j}_{j=0}^n, each of (n + 1) equally spaced distinct nodes, corresponding to the two variables x and t. These sets of nodes are distributed on the whole domain [a, b] and never fall outside it. The two sets depend on step sizes h_1, h_2, which in turn depend on some nonnegative numbers δ_1 ≥ 0, δ_2 ≥ 0. Based on the modified matrix forms (2)-(5), we obtain ũ_n(x) and f_n(x) in matrix form. The kernel k(x, t) is now interpolated twice: the first interpolation is performed with respect to the argument x, and the second with respect to the argument t, in inverse matrix order. Thus, we obtain the modified barycentric double interpolant kernel k_{n,n}(x, t) in the form (29). Here, N^T(t) = [n_j(t)]_{j=0}^n is the (n + 1) × 1 column matrix of the barycentric functions n_j(t), and the known square matrix K is computed from the kernel values at the nodes. By virtue of Eqs. (28) and (29), the product of the single interpolant unknown function ũ_n(t) and the double interpolant kernel k_{n,n}(x, t) can be replaced by the matrix form (31). Now, replacing k(x, t)u(t) on the right side of (6) with k_{n,n}(x, t)ũ_n(t) given by (31), we obtain ũ_n(x) in the form (32). Moreover, substituting the matrix-vector single interpolant ũ_n(x) given by (28) into both sides of (6), replacing k(x, t) with the matrix-vector double interpolant kernel k_{n,n}(x, t) given by (29), and replacing f(t) with f_n(t) given by (28), we obtain the equivalent algebraic linear system.
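A double barycentric interpolant built from two independent node sets can be checked numerically. The sketch below assembles k_nn(x, t) as a row of basis values in x, the matrix of kernel samples, and a column of basis values in t; the smooth kernel e^{xt} and the node counts are illustrative choices, not from the paper.

```python
import numpy as np
from math import comb

def bary_basis(x, nodes):
    # Row of Lagrange basis values l_i(x) in barycentric form, equispaced weights
    n = len(nodes) - 1
    w = np.array([(-1.0) ** i * comb(n, i) for i in range(n + 1)])
    diff = x - nodes
    hit = np.isclose(diff, 0.0)
    if hit.any():                       # x coincides with a node
        out = np.zeros(n + 1)
        out[np.argmax(hit)] = 1.0
        return out
    q = w / diff
    return q / q.sum()

k = lambda x, t: np.exp(x * t)          # illustrative smooth kernel
x_nodes = np.linspace(0.0, 1.0, 9)      # two independent node sets
t_nodes = np.linspace(0.0, 1.0, 9)
K = k(x_nodes[:, None], t_nodes[None, :])

def k_nn(x, t):
    # Double interpolant: row basis in x, kernel sample matrix, column basis in t
    return bary_basis(x, x_nodes) @ K @ bary_basis(t, t_nodes)
```

By construction the double interpolant reproduces the kernel exactly at every node pair (x_i, t_j), and for a smooth kernel it is accurate in between.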

Convergence and error analysis
In this section, we study the convergence in the mean [33,34] of the interpolant unknown function ũ_n(x) described in the first technique to the exact solution u(x). The goal now is to estimate the interpolation error. We denote the total error of the approximation by ε_n(x), so that ε_n(x) = ‖u(x) - ũ_n(x)‖₂, where ‖·‖₂ denotes the norm on L²[a, b].
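The error measure above can be estimated numerically once u and ũ_n are available as callables; a minimal sketch using Gauss-Legendre quadrature (the functions passed in are placeholders, not the paper's solutions):

```python
import numpy as np

def l2_error(u, u_n, a, b, m=64):
    # ||u - u_n||_2 = sqrt( integral over [a, b] of (u - u_n)^2 dx ),
    # computed with an m-point Gauss-Legendre rule mapped from [-1, 1] to [a, b]
    xg, wg = np.polynomial.legendre.leggauss(m)
    x = 0.5 * (b - a) * xg + 0.5 * (a + b)
    w = 0.5 * (b - a) * wg
    d = u(x) - u_n(x)
    return np.sqrt(np.sum(w * d * d))
```

For example, l2_error(np.sin, lambda x: 0 * x, 0.0, np.pi) recovers the known value sqrt(pi/2) of ‖sin‖₂ on [0, π].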

Computational results and discussions
Based on the two presented techniques, we designed two MATLAB R2019b codes for the solution of four weakly singular and two nonsingular Fredholm integral equations of the second kind. We found the interpolant solutions for the six examples by applying the two given techniques and compared the obtained results with the exact solutions. The obtained interpolant solutions to examples 1-4 converge strongly to the exact ones, faster than the methods mentioned in [20,21]. Moreover, the interpolant solutions are found easily and uniformly, in contrast to the complicated results of [20,21], as the given tables and figures show. The obtained interpolant solutions to examples 5-6, for nonsingular equations, are equal to the exact solutions. We denote the exact solution by u_ex(x); the interpolant solutions obtained using the first and second techniques are denoted by ũ¹_n(x) and ũ²_n(x), respectively, where n denotes the interpolant degree.

Table 1: The absolute errors R¹_n(x_i) for n = 2, 5.

Example 1 Consider the integral equation,
whose exact solution is given by u_ex(x) = x² [20]. Using the first technique with δ_1 = δ_2 = 0 and x = 1 in the kernel, we obtained uniformly interpolated polynomials ũ¹_n(x) for n = 2, 5. The CPU time was 8.294 s for n = 2 and 12.893 s for n = 5. We evaluated the exact solution values u_ex(x_i) and ũ¹_n(x_i) for n = 2, 5 at the set of nodes x_i = 0 : 0.1 : 1.0 and then estimated the absolute errors R¹_n(x_i) = |u_ex(x_i) - ũ¹_n(x_i)| for n = 2, 5, as shown in Table 1; Fig. 1 (the first technique) plots the graphs of these errors. Using the second technique for n = 4, 6 with δ_1 = 0, δ_2 = 1/150, we obtained the uniformly interpolated polynomials ũ²_4(x), ũ²_6(x). The total CPU time was 12.356 s for n = 4 and 18.000 s for n = 6. We evaluated u_ex(x_i) and ũ²_n(x_i) for n = 4, 6 at the set of nodes x_i = 0 : 0.1 : 1.0 and estimated the absolute errors R²_n(x_i) = |u_ex(x_i) - ũ²_n(x_i)| for n = 4, 6, as shown in Table 2; Fig. 2 plots the graphs of these errors.
Example 2 Consider the integral equation,

whose exact solution is given by u_ex(x) = √x [20]. Using the first technique for n = 2, 3 with δ_1 = δ_2 = 0 and x = 1 in the kernel, we obtained the uniformly interpolated polynomials ũ¹_2(x), ũ¹_3(x). The total CPU time was 8.282 s for n = 2 and 9.013 s for n = 3. We evaluated the exact solution values u_ex(x_i) and the interpolated polynomials ũ¹_2(x_i), ũ¹_3(x_i) at the set of nodes x_i = 0 : 0.1 : 1.0 and estimated the absolute errors R¹_n(x_i) = |u_ex(x_i) - ũ¹_n(x_i)| for n = 2, 3, as shown in Table 3; Fig. 3 plots the graphs of these errors. Using the second technique for n = 8, 12 with δ_1 = 0, δ_2 = 1/300, we obtained the uniformly interpolated polynomials ũ²_8(x), ũ²_12(x); the absolute errors R²_n(x_i) for n = 8, 12 are shown in Table 4, and Fig. 4 (the second technique) plots their graphs.

Table 5: The absolute errors R¹_n(x_i) for n = 3, 5.

Example 3 Consider the integral equation,
whose exact solution is given by u_ex(x) = e^x [20]. Using the first technique with δ_1 = δ_2 = 0 and x = 1 in the kernel, we obtain ũ¹_n(x) for n = 3, 5. The total CPU time was 8.882 s for n = 3 and 11.830 s for n = 5. We evaluated the exact solution values u_ex(x_i) and ũ¹_n(x_i) for n = 3, 5 and estimated the absolute errors R¹_n(x_i), as shown in Table 5; Fig. 5 (the first technique) plots the graphs of these errors. The absolute errors R²_n(x_i) obtained by the second technique for n = 6, 8 are shown in Table 6, and Fig. 6 plots their graphs.

Example 4 Consider the integral equation,
with

$$f(x) = x^2 (1 - x)^2 - \frac{27}{30800}\left[x^{8/3}\left(54x^2 - 126x + 77\right) + (1 - x)^{8/3}\left(54x^2 + 18x + 5\right)\right],$$

whose exact solution is given by u_ex(x) = x²(1 - x)² [21]. Using the first technique for n = 2, 5 with δ_1 = 0, δ_2 = 1/5, we obtain ũ¹_2(x), ũ¹_5(x). The total CPU time was 9.778 s for n = 2 and 15.501 s for n = 5. We evaluated the exact solution values u_ex(x_i) and the interpolated solution values ũ¹_2(x_i), ũ¹_5(x_i) at the set of nodes x_i = 0 : 0.1 : 1.0 and estimated the absolute errors R¹_n(x_i) = |u_ex(x_i) - ũ¹_n(x_i)| for n = 2, 5, as shown in Table 7; Fig. 7 plots the graphs of these errors. Using the second technique for n = 2, 3, 4 with δ_1 = 1/15, δ_2 = 0, we obtained ũ²_2(x_i), ũ²_3(x_i), ũ²_4(x_i) at the set of nodes x_i = 0 : 0.1 : 1.0. The total CPU time was 10.331 s for n = 2, 11.713 s for n = 3, and 15.699 s for n = 4. Table 8 contains the absolute errors R²_n(x_i) = |u_ex(x_i) - ũ²_n(x_i)| for n = 2, 3, 4 at the set of nodes x_i = 0 : 0.1 : 1.0, and Fig. 8 plots the graphs of R²_2(x_i), R²_3(x_i), and R²_4(x_i) at x_i = 0 : 0.1 : 1.0.

Example 5 Consider the integral equation

$$u(x) = e^{-x} + \int_0^1 e^{x+t} u(t)\, dt, \qquad 0 \le x \le 1, \qquad (60)$$

whose exact solution is given by u_ex(x) = e^{-x} + 2e^x/(3 - e²) [35]. Using the first technique with δ_1 = δ_2 = 0, we obtained uniformly interpolated polynomials ũ¹_n(x) for n = 2, 5, 10. The CPU time was 7.987 s for n = 2, 12.859 s for n = 5, and 36.077 s for n = 10.
We evaluated the exact solution values u_ex(x_i) and ũ¹_n(x_i) for n = 2, 5, 10 at the set of nodes x_i = 0 : 0.1 : 1.0 and then estimated the absolute errors R¹_n(x_i) = |u_ex(x_i) - ũ¹_n(x_i)| for n = 2, 5, 10, as shown in Table 9; Fig. 9 plots the graphs of these errors. Using the second technique for n = 2, 5, 10 with δ_1 = δ_2 = 0, we obtained the uniformly interpolated polynomials ũ²_n(x) for n = 2, 5, 10. The total CPU time was 8.086 s for n = 2, 13.579 s for n = 5, and 31.443 s for n = 10. We evaluated u_ex(x_i) and ũ²_n(x_i) for n = 2, 5, 10 at the set of nodes x_i = 0 : 0.1 : 1.0 and estimated the absolute errors R²_n(x_i) = |u_ex(x_i) - ũ²_n(x_i)|, as shown in Table 10 and Fig. 10. The generated interpolated solutions converge strongly to the exact ones with either the first or the second technique.
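Since the kernel of Example 5 is smooth and separable, the reported exact solution u_ex(x) = e^{-x} + 2e^x/(3 - e²) can be verified independently of the barycentric techniques. The sketch below uses a plain Nystrom discretization with Gauss-Legendre quadrature, which is not the paper's method, only a cross-check:

```python
import numpy as np

# Example 5: u(x) = e^{-x} + integral_0^1 e^{x+t} u(t) dt on [0, 1]
m = 24
xg, wg = np.polynomial.legendre.leggauss(m)
x = 0.5 * (xg + 1.0)                    # quadrature nodes mapped to [0, 1]
w = 0.5 * wg

# Nystrom system (I - K diag(w)) u = f with K[i, j] = e^{x_i + x_j}, f = e^{-x}
A = np.eye(m) - np.exp(x[:, None] + x[None, :]) * w[None, :]
u = np.linalg.solve(A, np.exp(-x))      # Nystrom values of u at the nodes

u_exact = np.exp(-x) + 2.0 * np.exp(x) / (3.0 - np.e ** 2)
err = np.max(np.abs(u - u_exact))
```

Because the kernel e^{x+t} is entire, the quadrature and hence the Nystrom solution converge spectrally, and the computed values match the stated closed-form solution to near machine precision.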
Example 6 Consider the nonsingular Fredholm integral equation of the second kind whose exact solution is given by u_ex(x) = x [36]. Using formula (10) of the first technique with δ_1 = δ_2 = 0, we obtained the interpolated polynomials ũ¹_n(x) for n = 5, 6. The CPU time was 11.474 s for n = 5 and 11.777 s for n = 6. We evaluated the exact solution values u_ex(x_i) and ũ¹_n(x_i) for n = 5, 6 at the set of nodes x_i = -1 : 0.2 : 1.0 and then estimated the absolute errors; it turns out that R¹_n(x_i) = |u_ex(x_i) - ũ¹_n(x_i)| = 0 for n = 5, 6. Using formulas (26) and (27) of the second technique with δ_1 = δ_2 = 0, we obtained interpolated polynomials ũ²_n(x) for n ≥ 2 exactly equal to the exact solution. The total CPU time for n = 2 was 9.659 s.

Conclusion
We modified the traditional barycentric Lagrange interpolation formula and expressed it as a product of four matrices, one of which is the monomial basis function matrix. Based on this advanced formula, we presented two techniques for finding the interpolant solutions of weakly singular Fredholm integral equations of the second kind. The kernel is interpolated twice with respect to both variables and is thus expressed via five matrices. The advantage of the presented techniques is that we can isolate the singularity of the kernel and easily find the interpolant solution in matrix form without applying the collocation method. The most important advantage lies in the given rules for choosing two different sets of interpolation nodes associated with the kernel's two variables, so that the argument of the kernel's root remains greater than zero and the kernel takes no imaginary values. Thus, the singularity is completely removed. The convergence in the mean and the error norm estimate are studied. The interpolant solutions of the first four illustrated examples, for weakly singular equations, converge strongly and uniformly to the exact ones, and faster than those obtained by the other cited methods. The interpolant solutions of the fifth and sixth examples, for nonsingular equations, are equal to the exact ones. Thus, the efficiency and genuineness of the given method are confirmed.