Here, we present two new techniques for solving weakly singular Fredholm integral equations of the second kind. Both techniques start by interpolating the unknown and data functions using Formula (4). As for the kernel, we apply Formula (4) twice to obtain a double interpolant polynomial expressed through five matrices. The two techniques differ in how the nodes are distributed for the two main variables \(x\) and \(t\) of the kernel. In the first technique, the \(x{ - }\)nodes are distributed on the right half of the integration domain, whereas the \(t{ - }\)nodes are distributed on the left half; the step sizes of the two node sets depend on real numbers \(\delta_{1} ,\delta_{2} \ge 0\), which in turn depend on the interpolation degree. In the second technique, the two node sets, one for each variable, are both distributed over the entire integration domain. Consider the weakly singular Fredholm integral equation of the second kind
$$ u\left( x \right) = \varphi \left( x \right) + \int\limits_{a}^{b} {k\left( {x,t} \right)u\left( t \right)dt} ;\quad a \le x \le b, $$
(6)
where \(\varphi \left( x \right)\) is a given function and \(u\left( x \right)\) is the unknown function, which belongs to \({\text{L}}^{2} \left[ {a,b} \right]\). Here, the given kernel \(k\left( {x,t} \right)\) takes the form \(k\left( {x,t} \right) = \frac{1}{{\left| {x - t} \right|^{\alpha } }}\); \(0 < \alpha < 1\). Moreover, \(\mathop {\max }\limits_{{x,t \in \left[ {a,b} \right]}} \left| {k\left( {x,t} \right)} \right| \le N\), \(\mathop {\max }\limits_{{x \in \left[ {a,b} \right]}} \left| {\varphi \left( x \right)} \right| \le M\), and \(\mathop {\max }\limits_{{x \in \left[ {a,b} \right]}} \left| {u\left( x \right)} \right| \le L\), where \(N,M,L\) are assumed to be real numbers.
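For illustration, the kernel of (6) can be evaluated directly away from the diagonal \(x = t\); the following minimal NumPy sketch (the function name and sample values are ours) is only an illustration of the formula:

```python
import numpy as np

def kernel(x, t, alpha):
    """Weakly singular kernel k(x, t) = 1 / |x - t|**alpha with 0 < alpha < 1, Eq. (6)."""
    return 1.0 / np.abs(x - t) ** alpha

# Example with alpha = 1/2 on [a, b] = [0, 1]; the value grows as x approaches t.
print(kernel(0.9, 0.1, 0.5))   # 1 / sqrt(0.8) ~ 1.118
```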
The first technique
Let \(\tilde{\varphi }_{n} \left( x \right)\) be the single interpolant polynomial that interpolates \(\varphi \left( x \right)\) of (6) on the basis of Formula (4) such that \(\tilde{\varphi }_{n} \left( x \right) \approx \varphi \left( x \right)\) and \(\tilde{\varphi }_{n} \left( {x_{i} } \right) = \varphi \left( {x_{i} } \right)\) for the set of equidistant nodes \(\left\{ {x_{i} } \right\}_{i = 0}^{n} ;x_{i} = a + ih,h = \frac{b - a}{n}\). By using the new Formula (4), \(\varphi \left( x \right)\) can be replaced by its interpolant polynomial \(\tilde{\varphi }_{n} \left( x \right)\) of degree \(n\) in the matrix form
$$ \tilde{\varphi }_{n} \left( x \right) = {\rm X}\left( x \right){\text{CW}}\Phi = {\rm X}\left( x \right){\rm P}\Phi ;\quad {\rm P} = {\text{CW}}, $$
(7)
where \({\rm P} = {\text{CW}}\) is an \(\left( {n + 1} \right) \times \left( {n + 1} \right)\) square matrix and \(\Phi\) is an \(\left( {n + 1} \right) \times 1\) column matrix such that
$$ {\rm P} = {\text{CW}},\quad {\rm P}^{T} = \left[ {p_{ij} } \right]_{i,j = 0}^{n} ;\quad p_{ij} = c_{ij} w_{i} ;\quad i,j = \overline{0,n} ,\quad \Phi^{T} = \left[ {\varphi_{i} } \right]_{i = 0}^{n} ;\quad \varphi_{i} = \varphi \left( {x_{i} } \right);\quad i = \overline{0,n} , $$
(8)
and \(c_{ij}\) are calculated by (5). Similarly, the unknown function \(u\left( x \right)\) can be interpolated in the same way as \(\varphi \left( x \right)\) to obtain its unknown single interpolant polynomial \(\tilde{u}_{n} \left( x \right)\) in the following matrix form:
$$ \tilde{u}_{n} \left( x \right) = {\rm X}\left( x \right){\rm P}{\text{U,}} $$
(9)
where \({\text{U}} = \left[ {u_{i} } \right]_{i = 0}^{n}\) is the \(\left( {n + 1} \right) \times 1\) unknown column matrix to be determined; its entries \(\left\{ {u_{i} } \right\}_{i = 0}^{n}\) are the undetermined coefficients of the unknown single interpolant polynomial.
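As a minimal sketch (assuming the coefficients \(c_{ij}\) of Formula (5) and the barycentric weights \(w_{i}\) are already available as arrays, which is not shown in this section), the matrices of (7)–(9) can be assembled entry-wise exactly as written in (8); the function name and argument layout are ours:

```python
import numpy as np

def assemble_P_Phi(C, w, phi, a, b):
    """Assemble P and Phi of Eqs. (7)-(8) on the equidistant nodes x_i = a + i*h.

    C   : (n+1, n+1) array of the coefficients c_ij from Formula (5) (assumed given)
    w   : (n+1,) array of barycentric weights w_i (assumed given)
    phi : callable data function phi(x) of Eq. (6)
    """
    n = C.shape[0] - 1
    x_nodes = np.linspace(a, b, n + 1)   # x_i = a + i*h, h = (b - a)/n
    P_T = C * w[:, None]                 # Eq. (8): (P^T)_{ij} = c_ij * w_i
    Phi = phi(x_nodes)                   # Phi^T = [phi(x_0), ..., phi(x_n)]
    return P_T.T, Phi

# The unknown column matrix U of Eq. (9) holds the coefficients solved for later in Eq. (22).
```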
Consequently, for the weakly singular kernel \(k\left( {x,t} \right) = \frac{1}{{\left| {x - t} \right|^{\alpha } }}\), which is singular as \(x \to t\), we interpolate it twice: the first interpolation is performed with respect to \(x\), and the second with respect to \(t\), so that we obtain the double interpolant polynomial \(\tilde{k}_{n,n} \left( {x,t} \right)\) of the two variables \(x\) and \(t\). The mathematical properties of the kernel force us to design a technique capable of removing this singularity, which can only be achieved under the necessary condition that \(x > t\). Thus, we adopt an approach based on an appropriate choice of two different sets of nodes: the first set \(\left\{ {\tilde{x}_{i} } \right\}_{i = 0}^{n}\) is distributed on the right-half interval of the integration domain \(\left[ {\frac{b - a}{2},b} \right]\), and the second set \(\left\{ {\tilde{t}_{i} } \right\}_{i = 0}^{n}\) is distributed on the left-half interval \(\left[ {a,\frac{b - a}{2}} \right]\). This yields two barycentric function summations \(\rho \left( x \right)\) and \(\tilde{\rho }\left( t \right)\): the first, \(\rho \left( x \right)\), corresponds to the nodes \(\left\{ {\tilde{x}_{i} } \right\}_{i = 0}^{n}\) and the barycentric functions \(\varpi_{i} \left( x \right) = \frac{{\zeta_{i} \left( x \right)}}{\rho \left( x \right)}\) with \(\zeta_{i} \left( x \right) = \frac{1}{{x - \tilde{x}_{i} }}\), whereas the second, \(\tilde{\rho }\left( t \right)\), corresponds to the nodes \(\left\{ {\tilde{t}_{i} } \right\}_{i = 0}^{n}\) and the barycentric functions \(\tilde{\varpi }_{i} \left( t \right) = \frac{{\tilde{\zeta }_{i} \left( t \right)}}{{\tilde{\rho }\left( t \right)}}\) with \(\tilde{\zeta }_{i} \left( t \right) = \frac{1}{{t - \tilde{t}_{i} }}\). We define \(\tilde{x}_{i}\) and \(\tilde{t}_{i}\) as follows:
$$ \tilde{x}_{i} = a + 0.5 + ih_{1} ; \quad h_{1} = \frac{{b - a - 4\delta_{1} }}{2n}, \quad \tilde{t}_{i} = a + ih_{2} ; \quad h_{2} = \frac{{b - a - 4\delta_{2} }}{2n};\quad i = \overline{0,n} . $$
(10)
We choose \(\delta_{1} ,\delta_{2} \ge 0\) such that \(\frac{b - a}{2} < h_{1} < b\) and \(a < h_{2} < \frac{b - a}{2}\). Moreover, we put \(h_{2} = \frac{{b - 3a - 4\delta_{2} }}{n + 0.1}\) for a kernel of the form \(\left| {1 - t} \right|^{ - 1/2}\), that is, when \(x = 1\). The two summations \(\rho \left( x \right)\) and \(\tilde{\rho }\left( t \right)\) are defined by
$$ \rho \left( x \right) = \sum\limits_{i = 0}^{n} {w_{i} \zeta_{i} \left( x \right)} ;\quad \tilde{\rho }\left( t \right) = \sum\limits_{i = 0}^{n} {w_{i} \tilde{\zeta }_{i} \left( t \right)} ;\quad \zeta_{i} \left( x \right) = \frac{1}{{x - \tilde{x}_{i} }},\quad \tilde{\zeta }_{i} \left( t \right) = \frac{1}{{t - \tilde{t}_{i} }}. $$
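A small sketch of the node construction (10) and the two barycentric summations, assuming the equidistant barycentric weights \(w_{i} = \left( { - 1} \right)^{i} \left( {\begin{array}{*{20}c} n \\ i \\ \end{array} } \right)\) quoted later in (30); the helper names are ours:

```python
import numpy as np
from scipy.special import comb

def first_technique_nodes(a, b, n, delta1=0.0, delta2=0.0):
    """Node sets of Eq. (10): x-nodes on the right half of [a, b], t-nodes on the left half."""
    h1 = (b - a - 4.0 * delta1) / (2.0 * n)
    h2 = (b - a - 4.0 * delta2) / (2.0 * n)
    x_nodes = a + 0.5 + h1 * np.arange(n + 1)
    t_nodes = a + h2 * np.arange(n + 1)
    return x_nodes, t_nodes

def rho(x, nodes, w):
    """Barycentric summation rho(x) = sum_i w_i * zeta_i(x); x must not equal a node."""
    return np.sum(w / (x - nodes))

n = 6
w = (-1.0) ** np.arange(n + 1) * comb(n, np.arange(n + 1))   # assumed weights, cf. Eq. (30)
x_nodes, t_nodes = first_technique_nodes(0.0, 1.0, n)
print(rho(0.3, x_nodes, w), rho(0.3, t_nodes, w))
```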
By using the same strategy used to derive Formula (4), the kernel \(k\left( {x,t} \right)\) can be interpolated with respect to \(x\) using the set of nodes \(\left\{ {\tilde{x}_{i} } \right\}_{i = 0}^{n}\) via four matrices as follows:
$$ \tilde{k}_{n,n} \left( {x,t} \right) = {\rm X}\left( x \right){\text{CW}}{\rm K}\left( {\tilde{x}_{i} ,t} \right), $$
(11)
where \({\rm K}\left( {\tilde{x}_{i} ,t} \right)\) is the column matrix such that
$$ {\rm K}^{T} \left( {\tilde{x}_{i} ,t} \right) = \left[ {\begin{array}{*{20}c} {k\left( {\tilde{x}_{0} ,t} \right)} & {k\left( {\tilde{x}_{1} ,t} \right)} & {k\left( {\tilde{x}_{2} ,t} \right)} & {...} & {k\left( {\tilde{x}_{n} ,t} \right)} \\ \end{array} } \right]. $$
(12)
In the same context, we again interpolate each function \(k\left( {\tilde{x}_{i} ,t} \right)\), \(i = \overline{0,n}\), using the set of nodes \(\left\{ {\tilde{t}_{j} } \right\}_{j = 0}^{n}\). After lengthy substitutions and simplifications, carried out through matrix operations, we obtain the kernel through five matrices, two of which are monomial basis function matrices: the row monomial basis function matrix \({\rm X}\left( x \right)\) in the variable \(x\) and the column monomial basis function matrix \({\rm X}^{T} \left( t \right)\) in the variable \(t\). Thus, we obtain the advanced barycentric double interpolant polynomial \(\tilde{k}_{n,n} \left( {x,t} \right)\) via five matrices as follows:
$$ \tilde{k}_{n,n} \left( {x,t} \right) = {\rm X}\left( x \right){\text{AKB}}{\rm X}^{T} \left( t \right), $$
(13)
where the \(\left( {n + 1} \right) \times \left( {n + 1} \right)\) square matrix \({\text{K}}\) is calculated as follows:
$$ {\text{K}} = \left[ {w_{ij} k_{ij} } \right]_{i,j = 0}^{n} ;\quad k_{ij} = k\left( {\tilde{x}_{i} ,\tilde{t}_{j} } \right);\quad w_{ij} = w_{i} \times w_{j} ;\quad i,j = \overline{0,n} . $$
(14)
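A sketch of the weighted kernel matrix (14); since every \(\tilde{x}_{i}\) lies to the right of every \(\tilde{t}_{j}\), the entries \(\left| {\tilde{x}_{i} - \tilde{t}_{j} } \right|^{ - \alpha }\) are finite. The weights are again taken as \(w_{i} = \left( { - 1} \right)^{i} \left( {\begin{array}{*{20}c} n \\ i \\ \end{array} } \right)\), our assumption carried over from (30):

```python
import numpy as np
from scipy.special import comb

def kernel_matrix(x_nodes, t_nodes, alpha):
    """K of Eq. (14): K[i, j] = w_i * w_j * k(x_i, t_j) with k(x, t) = |x - t|**(-alpha)."""
    n = len(x_nodes) - 1
    w = (-1.0) ** np.arange(n + 1) * comb(n, np.arange(n + 1))   # assumed weights, cf. Eq. (30)
    diff = np.abs(x_nodes[:, None] - t_nodes[None, :])           # x-nodes > t-nodes, so diff > 0
    return np.outer(w, w) * diff ** (-alpha)
```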
Here, \({\text{A}}^{T} = \left[ {a_{ij} } \right]_{i,j = 0}^{n}\) and \({\text{B}}^{T} = \left[ {b_{ij} } \right]_{i,j = 0}^{n}\) are \(\left( {n + 1} \right) \times \left( {n + 1} \right)\) square matrices whose entries \(a_{ij}\) and \(b_{ij}\) can be calculated by
$$ a_{ij} = \frac{{\varpi_{i}^{\left( j \right)} \left( 0 \right)}}{j!},\quad b_{ij} = \frac{{\tilde{\varpi }_{i}^{\left( j \right)} \left( 0 \right)}}{j!}\quad \forall i,j = \overline{0,n} . $$
(15)
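The entries (15) are Taylor coefficients at \(x = 0\) of the barycentric functions. A minimal symbolic sketch (our own, using sympy with exact node and weight values) exploits the standard barycentric identity that, when the weights are the barycentric weights of the nodes, \(\varpi_{i} = \zeta_{i} /\rho\) is a polynomial of degree \(n\), so its Taylor coefficients at \(0\) are simply its monomial coefficients:

```python
import sympy as sp

def taylor_coefficient_matrix(nodes, w):
    """Array [a_ij] with a_ij = varpi_i^{(j)}(0) / j!  (Eq. (15)); per (15) this array is A^T,
    so A is the transpose of the returned matrix.  Calling it with the t-nodes and their
    weights gives the corresponding array for B."""
    x = sp.Symbol("x")
    n = len(nodes) - 1
    nodes = [sp.nsimplify(v, rational=True) for v in nodes]   # exact arithmetic
    w = [sp.nsimplify(v, rational=True) for v in w]
    rho = sum(wk / (x - xk) for wk, xk in zip(w, nodes))
    M = sp.zeros(n + 1, n + 1)
    for i in range(n + 1):
        varpi = sp.cancel(sp.together((1 / (x - nodes[i])) / rho))  # degree-n polynomial
        poly = sp.Poly(varpi, x)
        for j in range(n + 1):
            M[i, j] = poly.coeff_monomial(x ** j)               # coefficient of x^j
    return M
```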
Moreover, substituting \(\tilde{k}_{n,n} \left( {x,t} \right)\) given by (13) and \(\tilde{u}_{n} \left( t \right)\) given by (9) into the right side of (6), we obtain \(\tilde{u}_{n} \left( x \right)\) in the following matrix form:
$$ \tilde{u}_{n} \left( x \right) = \varphi \left( x \right) + \int\limits_{a}^{b} {{\rm X}\left( x \right){\rm N}\tilde{\rm X}\left( t \right){\rm P}{\text{U}}\,dt} , $$
(16)
where the \(\left( {n + 1} \right) \times \left( {n + 1} \right)\) square matrix \({\rm N}\) and the \(\left( {n + 1} \right) \times \left( {n + 1} \right)\) square matrix \(\tilde{\rm X}\left( t \right)\) are defined by
$$ {\rm N} = {\text{AKB}},\quad \tilde{\rm X}\left( t \right) = {\rm X}^{T} \left( t \right){\rm X}\left( t \right) = \left[ {t^{i + j} } \right]_{i,j = 0}^{n} . $$
(17)
By integrating the right side of (16), we obtain
$$ \tilde{u}_{n} \left( x \right) = \varphi \left( x \right) + {\rm X}\left( x \right){\rm N}{\rm H}{\rm P}{\text{U}}. $$
(18)
Here, the \(\left( {n + 1} \right) \times \left( {n + 1} \right)\) square matrix \({\rm H}\) is given by
$$ {\rm H} = \int\limits_{a}^{b} {\tilde{\rm X}\left( t \right)dt} = \left[ {h_{ij} } \right]_{i,j = 0}^{n}; \quad h_{ij} = \int\limits_{a}^{b} {t^{i + j} dt} = \left. {\frac{{t^{i + j + 1} }}{i + j + 1}} \right|_{a}^{b} = \frac{{b^{i + j + 1} - a^{i + j + 1} }}{i + j + 1}; \quad i,j = \overline{0,n} . $$
(19)
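The moment matrix (19) has a simple closed form; a brief sketch (function name ours):

```python
import numpy as np

def moment_matrix(a, b, n):
    """H of Eq. (19): H[i, j] = (b**(i+j+1) - a**(i+j+1)) / (i + j + 1)."""
    s = np.arange(n + 1)[:, None] + np.arange(n + 1)[None, :] + 1   # i + j + 1
    return (float(b) ** s - float(a) ** s) / s
```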
Furthermore, by replacing \(u\left( x \right)\) on the left side of (6) with \(\tilde{u}_{n} \left( x \right)\) defined by (9) and replacing \(k\left( {x,t} \right)u\left( t \right)\) on the right side with \(\tilde{k}_{n,n} \left( {x,t} \right)\tilde{u}_{n} \left( t \right)\), we obtain
$$ {\rm X}\left( x \right){\rm N}{\rm H}{\rm P}{\text{U}} - {\rm X}\left( x \right){\rm N}{\rm H}{\rm N}{\rm H}{\rm P}{\text{U}}={\rm X}\left( x \right){\rm N}{\rm H}{\rm P}\Phi . $$
(20)
Simplifying (20) yields the linear algebraic system
$$ \left( {{\text{I}} - {\rm N}{\rm H}} \right){\rm P}{\text{U }}={\rm P}\Phi . $$
(21)
By applying any direct method, we can solve system (21) to obtain the unknown coefficient column matrix \({\text{U}}\):
$$ {\text{U }} = {\rm P}^{ - 1} {\rm M}^{ - 1} {\rm P}\Phi ; \quad {\rm M} = \left( {{\text{I}} - {\rm N}{\rm H}} \right). $$
(22)
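In practice, (22) is best evaluated with linear solves rather than explicit inverses; a minimal sketch, assuming the matrices \({\rm N} = {\text{AKB}}\), \({\rm H}\), \({\rm P}\) and the vector \(\Phi\) have been assembled as above:

```python
import numpy as np

def solve_first_technique(N, H, P, Phi):
    """Solve (I - N H) P U = P Phi (Eq. (21)); equivalent to U = P^{-1} M^{-1} P Phi (Eq. (22))."""
    M = np.eye(N.shape[0]) - N @ H
    PU = np.linalg.solve(M, P @ Phi)     # M^{-1} P Phi
    return np.linalg.solve(P, PU)        # U = P^{-1} M^{-1} P Phi
```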
Accordingly, the interpolant solution given by (9) takes the simple matrix form
$$ \tilde{u}_{n} \left( x \right) = {\rm X}\left( x \right){\rm P}{\rm P}^{ - 1} {\rm M}^{ - 1} {\rm P}\Phi = {\text{X}}\left( x \right)\Omega , $$
(23)
where \(\Omega\) is the \(\left( {n + 1} \right) \times 1\) column matrix
$$ \Omega = {\rm M}^{ - 1} {\rm P}\Phi = \left[ {\gamma_{i} } \right]_{i = 0}^{n} . $$
(24)
The entries \(\left\{ {\gamma_{i} } \right\}_{i = 0}^{n}\) of \(\Omega\) can be easily calculated as the product \({\rm M}^{ - 1} {\rm P}\Phi\) of the three known matrices. Hence, the interpolant polynomial solution of the integral Eq. (6) is given by
$$ \tilde{u}_{n} \left( x \right) = \sum\limits_{i = 0}^{n} {\gamma_{i} x^{i} } ;\quad a \le x \le b. $$
(25)
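Since (23) shows that \({\rm P}\) cancels in the final solution, one can work directly with \(\Omega\); the following sketch (ours, assuming \({\rm N}\), \({\rm H}\), \({\rm P}\), \(\Phi\) are available) returns the polynomial (25) as a callable:

```python
import numpy as np

def interpolant_solution(N, H, P, Phi):
    """Return u_n(x) = sum_i gamma_i * x**i of Eq. (25), with Omega = M^{-1} P Phi (Eq. (24))."""
    M = np.eye(N.shape[0]) - N @ H
    gamma = np.linalg.solve(M, P @ Phi)              # entries gamma_0, ..., gamma_n
    return lambda x: np.polyval(gamma[::-1], x)      # polyval expects highest degree first
```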
The second technique
We choose two sets of nodes \(\left\{ {x_{i} } \right\}_{i = 0}^{n}\) and \(\left\{ {t_{j} } \right\}_{j = 0}^{n}\), each consisting of \(\left( {n + 1} \right)\) equally spaced distinct nodes and corresponding to the variables \(x\) and \(t\), respectively. Both sets are distributed over the whole domain \(\left[ {a,b} \right]\) and never fall outside it. These nodes depend on the step sizes \(h_{1} ,h_{2}\), which in turn depend on some nonnegative numbers \(\delta_{1} \ge 0,\,\delta_{2} \ge 0\); we define
$$ h_{1} = \frac{{\left( {b - \delta_{1} } \right) - \left( {a + \delta_{1} } \right)}}{n}, \,h_{2} = \frac{{\left( {b - \delta_{2} } \right) - \left( {a + \delta_{2} } \right)}}{n}, $$
(26)
and
$$ x_{i} = \left( {a + \delta_{1} } \right) + ih_{1},\quad t_{j} = \left( {a + \delta_{2} } \right) + jh_{2} ; \quad i,j = \overline{0,n} . $$
(27)
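The node construction (26)–(27) of the second technique, as a brief sketch (function name ours):

```python
import numpy as np

def second_technique_nodes(a, b, n, delta1=0.0, delta2=0.0):
    """Node sets of Eqs. (26)-(27); both lie inside [a, b]."""
    h1 = ((b - delta1) - (a + delta1)) / n
    h2 = ((b - delta2) - (a + delta2)) / n
    x_nodes = (a + delta1) + h1 * np.arange(n + 1)
    t_nodes = (a + delta2) + h2 * np.arange(n + 1)
    return x_nodes, t_nodes
```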
Based on the modified matrix forms (2)–(5), we obtain the single interpolants \(\tilde{u}_{n} \left( x \right)\) of the unknown function and \(\tilde{f}_{n} \left( x \right)\) of the given data function \(f\left( x \right)\) in the form
$$ \tilde{u}_{n} \left( x \right) = \Psi \left( x \right){\text{WU}}, \quad \tilde{f}_{n} \left( x \right) = \Psi \left( x \right){\text{WF}}{.} $$
(28)
The kernel \(k\left( {x,t} \right)\) is now interpolated twice: the first interpolation is performed with respect to the argument \(x\), and the second with respect to the argument \(t\), in reverse matrix order. Thus, we obtain the modified barycentric double interpolant kernel \(\tilde{k}_{n,n} \left( {x,t} \right)\) in the form
$$ \tilde{k}_{n,n} \left( {x,t} \right) = \Psi \left( x \right){\rm K}{\rm N}^{T} \left( t \right). $$
(29)
Here, \({\rm N}^{T} \left( t \right) = \left[ {n_{j} \left( t \right)} \right]_{j = 0}^{n}\) is the \(\left( {n + 1} \right) \times 1\) column matrix of the barycentric functions \(n_{j} \left( t \right)\), where
$$ n_{j} \left( t \right) = \frac{{\xi_{j} \left( t \right)}}{\varphi \left( t \right)},\quad \xi_{j} \left( t \right) = \frac{1}{{t - t_{j} }},\quad \varphi \left( t \right) = \sum\limits_{j = 0}^{n} {w_{j} \,\xi_{j} \left( t \right)} ;\quad w_{j} = \left( { - 1} \right)^{j} \left( {\begin{array}{*{20}c} n \\ j \\ \end{array} } \right), $$
(30)
and the known square matrix \({\rm K}\) is given by
$$ {\rm K} = \left[ {k_{ij} } \right]_{i,j = 0}^{n} ; \quad k_{ij} = w_{ij} \, k\left( {x_{i} ,t_{j} } \right);\quad w_{ij} = w_{i} \times w_{j} . $$
(31)
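A sketch evaluating the barycentric functions of (30) at a point away from the nodes and assembling the weighted kernel matrix (31); the names are ours, and the kernel call assumes no coincidence \(x_{i} = t_{j}\) between the two node sets:

```python
import numpy as np
from scipy.special import comb

def bary_weights(n):
    """w_j = (-1)^j * C(n, j), Eq. (30)."""
    j = np.arange(n + 1)
    return (-1.0) ** j * comb(n, j)

def barycentric_basis(t, t_nodes, w):
    """n_j(t) = xi_j(t) / phi(t) of Eq. (30); t must not coincide with a node."""
    xi = 1.0 / (t - t_nodes)
    return xi / np.dot(w, xi)

def kernel_matrix_2(x_nodes, t_nodes, alpha, w):
    """K of Eq. (31): K[i, j] = w_i * w_j * |x_i - t_j|**(-alpha); requires x_i != t_j."""
    return np.outer(w, w) * np.abs(x_nodes[:, None] - t_nodes[None, :]) ** (-alpha)
```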
By virtue of Eqs. (28) and (29), the product of the double interpolant kernel \(\tilde{k}_{n,n} \left( {x,t} \right)\) and the single interpolant unknown function \(\tilde{u}_{n} \left( t \right)\) can be written in the following matrix form:
$$ \tilde{k}_{n,n} \left( {x,t} \right)\tilde{u}_{n} \left( t \right) = \Psi \left( x \right){\rm K}{\rm N}^{T} \left( t \right)\Psi \left( t \right){\text{WU}} = \Psi \left( x \right){\rm K}\tilde{\Psi }\left( t \right){\text{WU}};\quad \tilde{\Psi }\left( t \right) = {\rm N}^{T} \left( t \right)\Psi \left( t \right). $$
(32)
Now, replacing \(k\left( {x,t} \right)u\left( t \right)\) in the right side of (6) with \(\tilde{k}_{n,n} \left( {x,t} \right)\tilde{u}_{n} \left( t \right)\) given by (32), we obtain \(\tilde{u}_{n} \left( x \right)\) in the form
$$ \tilde{u}_{n} \left( x \right) = f\left( x \right) + \Psi \left( x \right){\rm K}\Phi {\text{WU}};\quad \Phi = \int\limits_{a}^{b} {\tilde{\Psi }\left( t \right)dt} . $$
(33)
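The matrix \(\Phi\) of (33) has entries \(\int_{a}^{b} {n_{i} \left( t \right)n_{j} \left( t \right)dt}\). A sketch approximating it with Gauss–Legendre quadrature (our choice of quadrature; when \(n_{i} n_{j}\) reduces to a polynomial of degree \(2n\), \(n + 1\) Gauss points already integrate it exactly):

```python
import numpy as np
from scipy.special import comb

def Phi_matrix(a, b, t_nodes, m=None):
    """Phi of Eq. (33): Phi[i, j] ~= int_a^b n_i(t) n_j(t) dt via Gauss-Legendre quadrature."""
    n = len(t_nodes) - 1
    m = m or (n + 1)
    w = (-1.0) ** np.arange(n + 1) * comb(n, np.arange(n + 1))   # weights of Eq. (30)
    g, gw = np.polynomial.legendre.leggauss(m)                   # rule on [-1, 1]
    t = 0.5 * (b - a) * g + 0.5 * (a + b)                        # map to [a, b]
    gw = 0.5 * (b - a) * gw
    Phi = np.zeros((n + 1, n + 1))
    for tk, wk in zip(t, gw):
        xi = 1.0 / (tk - t_nodes)            # assumes quadrature points avoid the nodes
        nvec = xi / np.dot(w, xi)            # n_j(tk), Eq. (30)
        Phi += wk * np.outer(nvec, nvec)     # Psi~(tk) = N^T(tk) Psi(tk)
    return Phi
```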
Moreover, by substituting the matrix–vector single interpolant \(\tilde{u}_{n} \left( x \right)\) given by (28) into both sides of (6), replacing \(k\left( {x,t} \right)\) with the matrix–vector double interpolant kernel \(\tilde{k}_{n,n} \left( {x,t} \right)\) given by (29), and replacing \(f\left( t \right)\) with \(\tilde{f}_{n} \left( t \right)\) given by (28), we find that
$$ \Psi \left( x \right){\rm K}\Phi {\text{WU}} - \int\limits_{a}^{b} {\Psi \left( x \right){\rm K}\tilde{\Psi }\left( t \right){\rm K}\Phi {\text{WU}}dt} = \int\limits_{a}^{b} {\Psi \left( x \right){\rm K}\tilde{\Psi }\left( t \right){\text{WF}}dt} . $$
(34)
Simplifying Eq. (34) yields
$$ \Psi \left( x \right){\rm K}\Phi {\text{WU}} - \Psi \left( x \right){\rm K}\Phi {\rm K}\Phi {\text{WU}} = \Psi \left( x \right){\rm K}\Phi {\text{WF}}{.} $$
(35)
From this equation, we find the required unknown coefficient matrix \({\text{U}} = \left( {{\text{W}} - {\rm K}\Phi {\text{W}}} \right)^{ - 1} {\text{WF}}\); substituting it into (28), we obtain the matrix–vector single interpolant \(\tilde{u}_{n} \left( x \right)\)
$$ \tilde{u}_{n} \left( x \right) = \Psi \left( x \right){\text{W}}\left( {{\text{W}} - {\rm K}\Phi {\text{W}}} \right)^{ - 1} {\text{WF}}{.} $$
(36)
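Finally, a sketch of (36), assuming the matrices \({\text{W}}\), \({\rm K}\), \(\Phi\) and the data vector \({\text{F}}\) have been assembled as above and that the row matrix \(\Psi \left( x \right)\) is evaluated with the barycentric routine of (30):

```python
import numpy as np

def solve_second_technique(W, K, Phi, F):
    """Coefficient matrix U = (W - K Phi W)^{-1} W F preceding Eq. (36)."""
    return np.linalg.solve(W - K @ Phi @ W, W @ F)

def u_tilde(Psi_x, W, U):
    """u_n(x) = Psi(x) W U, Eq. (36); Psi_x is the row matrix Psi(x) at a point x."""
    return Psi_x @ W @ U
```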