
Computational method for solving weakly singular Fredholm integral equations of the second kind using an advanced barycentric Lagrange interpolation formula

Abstract

In this study, we applied an advanced barycentric Lagrange interpolation formula to find interpolant solutions of weakly singular Fredholm integral equations of the second kind. The kernel is interpolated twice with respect to both variables and is thereby transformed into the product of five matrices, two of which are monomial basis matrices. To isolate the singularity of the kernel, we developed two techniques based on a careful choice of two different sets of nodes distributed over the integration domain. Each set is specific to one of the kernel arguments, so the kernel values never become zero or imaginary. A significant advantage of the two presented techniques is that they give access to an algebraic linear system equivalent to the interpolant solution without applying the collocation method. Moreover, the convergence in the mean of the interpolant solution and the estimation of the maximum error norm are studied. The interpolant solutions of the four illustrated examples are found to converge strongly and uniformly to the exact solutions.

Introduction

It has become common to obtain the solutions of initial, boundary, or mixed value problems through the integral equation method. This technique converts the value problem into an equivalent boundary integral equation of a specified type and kind. One of the common equivalent boundary integral equations is the weakly singular Fredholm integral equation of the second kind. These equations appear in many engineering fields, such as radiation, potential theory, scattering, and electromagnetism, as well as in other scientific fields [1,2,3,4,5]. The singularity of an integral equation may be due to a singular kernel, a singular unknown function, or both. For example, the Dirichlet boundary value problem for the Laplace equation on an open arc in the plane is predominantly reduced to the solution of a weakly singular Fredholm integral equation of the first kind whose unknown function is singular at the endpoints of the integration domain and whose kernel is weakly (logarithmically) singular [6,7,8,9,10,11,12,13,14]. Dmitriev et al. [6] provided an iterative method for solving the Fredholm integral equation of the first kind with a weakly singular logarithmic kernel and a nonsingular unknown function. Shoukralla [7, 8] presented two methods for the solution of the Fredholm integral equation with a logarithmic singular kernel and a singular unknown function. The kernel singularity is isolated analytically by the Kantorovich technique, and the unknown functions are approximated on the basis of Taylor and Chebyshev polynomials with an analytical treatment of the singularity. These techniques provided acceptable solutions at the time, despite the difficulty and complexity of the two procedures. Shoukralla [9, 10] solved the same integral equation on the basis of Chebyshev polynomials of the second kind, thus providing high-accuracy results.

Shoukralla et al. [11,12,13] solved a certain class of singular Fredholm integral equations of the first kind with singular logarithmic kernels and singular unknown functions on the basis of monic and economized monic Chebyshev polynomials, with different approaches for removing the singularities. In this study, we focus on the numerical solution of Fredholm equations of the second kind with another type of kernel singularity, which requires a technique different from those used for the singular logarithmic kernels of Fredholm equations of the first kind. Many methods for solving weakly singular Fredholm integral equations of the second kind have been published [14,15,16,17,18,19,20,21,22,23,24,25]. For example, Yang et al. [15] used the Jacobi–Gauss quadrature formula to approximate the integral operator in the numerical implementation of the spectral collocation method and established the spectral Chebyshev collocation method for solving Fredholm integral equations of the second kind with weakly singular kernels. This method shows that the errors of the approximate solution decay exponentially in the infinity and weighted norms. Behzadi et al. [14] developed some modifications of the generalized Euler–Maclaurin summation formula by using Bernoulli functions to construct a generalized quadrature and, from it, a numerical method based on the trapezoidal rule for solving weakly singular integral equations.

In this study, we advance the application of advanced single and double barycentric Lagrange interpolation formulas, adapting them so that they completely isolate the kernel singularity and yield accurate solutions of weakly singular Fredholm equations of the second kind. Shoukralla et al. [26,27,28,29] developed a new version of the traditional barycentric Lagrange interpolation [30] and applied it successfully to solve linear nonsingular Volterra integral equations of the second kind. For weakly singular Fredholm equations of the second kind, the matter is more complicated because of difficulties related to the singularity of the kernel.

This study focuses on the application of two advanced barycentric interpolation formulas to solve the weakly singular Fredholm integral equation of the second kind, with two innovative techniques for treating the kernel singularity that depend on a suitable choice of the node distribution rules. Naturally, our primary aim is to reduce computational complexity, but whether, and how fast, the interpolant solution converges to the exact one is at least as important.

One of the advantages of the presented techniques is the design of rules for the distribution of nodes so that they always remain within the domain of integration and never fall outside it. Moreover, these rules are designed so that the difference between the nodes assigned to the two variables of the kernel always remains positive. Based on this idea, the numerical solutions are stable [31] on the whole interval, even at the endpoints, as shown in the solved examples.

We begin by interpolating the unknown and data functions using the advanced single matrix-form barycentric interpolant polynomials; each is expressed through four matrices, one of which is the monomial basis functions matrix [32]. The weakly singular kernel is then interpolated twice with respect to its two arguments: the first interpolation is with respect to the first variable, of positive sign, with nodes distributed on the right half of the integration domain; the second interpolation is with respect to the variable of negative sign, with nodes distributed on the left half of the integration domain. This scheme ensures that the difference between the kernel's two variables always remains positive, thus completely removing the singularity of the kernel and expressing it through five matrices, two of which are monomial basis matrices.

Additional advantages of these techniques are that they not only simplify the calculations but also give access to an equivalent linear system of algebraic equations without applying the collocation method. This is achieved by substituting the interpolant unknown function into both sides of the integral equation. By solving the obtained algebraic linear system directly, the unknown coefficient matrix can be found, and consequently the interpolant unknown function. Six examples are solved using the two presented techniques; examples 1–4 involve weakly singular equations, while examples 5–6 involve linear nonsingular equations. The first three examples are also solved by a trapezoidal approach mentioned in [14], whereas the fourth boundary integral equation, which arises in problems of radiation, potential theory, scattering theory, electromagnetism, and hydrodynamics, is solved in [15]. The solutions of examples 1–4 obtained by the two presented techniques are found to converge strongly to the exact solutions in comparison with [14, 15]. The solutions of examples 5–6 obtained by the two presented techniques are found to be equal to the exact solutions. The given tables and graphs demonstrate the originality, eligibility, and accuracy of the presented new method.

Advanced barycentric Lagrange interpolation formula

The question of representing a given function by an interpolant is vital in approximation theory, as well as in computational methods. A remarkable advantage of Lagrange interpolation is its independence of the arrangement of the selected nodes, although efficient results require only a few nodes. By contrast, increasing the number of nodes complicates the scheme and destabilizes the numerical solution. The barycentric Lagrange interpolation is a reformulation that markedly improves the performance of traditional Lagrange interpolation. In this section, we provide a mathematical formula, different in form and content, that exceeds the well-known traditional barycentric Lagrange formula. The new formula is expressed through matrices, one of which is the monomial basis matrix; this matrix cancels in the solution procedure. Thus, the steps of the solution are reduced, the round-off error is minimized, and high-precision solutions are provided.

Let the function \(f\left( x \right)\) be defined on \(\left[ {a,b} \right]\) as the tabulated function \(f\left( {x_{i} } \right) = f_{i} \, ; \, i = \overline{0,n}\) for the set of \(\left( {n + 1} \right)\) equally spaced distinct nodes \(\left\{ {x_{i} } \right\}_{i = 0}^{n}\) such that \(x_{i} = a + ih\), where the step size \(h\) is defined by \(h = \frac{b - a}{n}\). Then, Berrut et al. [30] provided the barycentric Lagrange interpolating polynomial of degree \(n\), \(\tilde{f}_{n} \left( x \right)\), which interpolates the tabulated function \(f\left( {x_{i} } \right) = f_{i}\) such that \(\tilde{f}_{n} \left( {x_{i} } \right) = f\left( {x_{i} } \right) = f_{i}\), in the following form:

$$ \tilde{f}_{n} \left( x \right) = \frac{{\sum\nolimits_{i = 0}^{n} {\frac{{w_{i} }}{{x - x_{i} }}f_{i} } }}{{\sum\nolimits_{i = 0}^{n} {\frac{{w_{i} }}{{x - x_{i} }}} }};\quad w_{i} = \left( { - 1} \right)^{i} \binom{n}{i}. $$
(1)

Although Formula (1) is simpler than the traditional Lagrange interpolating polynomial [30], it is still difficult to apply for interpolating the unknown functions, as well as the kernels, of integral equations of any type and kind, because it hinders the steps of the solution and causes computational impediments. Therefore, we adapt this formula before using it so that it becomes easier to apply for solving integral equations. Using some operational matrix algebra, we can increase the computational efficiency of Formula (1) and obtain an improved matrix formula by expanding the numerator and distributing it over the denominator, separating out the barycentric weights \(w_{i}\). Thus, we obtain \(\tilde{f}_{n} \left( x \right)\) in the modified matrix form

$$ \tilde{f}_{n} \left( x \right) = \Psi \left( x \right){\text{WF}}{.} $$
(2)

Here, \(\Psi \left( x \right)\) is a \(1 \times \left( {n + 1} \right)\) row matrix, \({\text{W}} = {\text{diag}}\left\{ {w_{0} ,w_{1} , \ldots ,w_{n} } \right\}\) is the \(\left( {n + 1} \right) \times \left( {n + 1} \right)\) square diagonal matrix whose entries \(w_{i}\) are defined by (1), and \({\text{F}}^{T} = \left[ {f_{i} } \right]_{i = 0}^{n}\) is the \(\left( {n + 1} \right) \times 1\) column matrix whose entries \(f_{i}\) are the functional values of \(f\left( x \right)\), such that

$$ \Psi \left( x \right) = \left[ {\psi_{i} \left( x \right)} \right]_{i = 0}^{n} ;\quad \psi_{i} \left( x \right) = \frac{{\xi_{i} \left( x \right)}}{\phi \left( x \right)},\quad \phi \left( x \right) = \sum\limits_{i = 0}^{n} {w_{i} \xi_{i} \left( x \right)} ,\quad \xi_{i} \left( x \right) = \frac{1}{{x - x_{i} }};\quad i = \overline{0,n} . $$
(3)
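
To make the construction concrete, the following minimal sketch (in Python with NumPy; the function name and the worked example are ours, not part of the authors' codes) evaluates the barycentric interpolant (1) at arbitrary points using the equispaced nodes and binomial weights defined above.

```python
import numpy as np
from math import comb

def barycentric_interpolate(f_vals, a, b, x_eval):
    """Evaluate the barycentric Lagrange interpolant (1) at the points x_eval.

    f_vals holds the values f_i at the (n+1) equally spaced nodes x_i = a + i*h.
    Evaluation points that coincide with a node return the nodal value directly.
    """
    n = len(f_vals) - 1
    nodes = np.linspace(a, b, n + 1)                 # x_i = a + i*h
    w = np.array([(-1) ** i * comb(n, i) for i in range(n + 1)], dtype=float)

    x_eval = np.atleast_1d(np.asarray(x_eval, dtype=float))
    out = np.empty_like(x_eval)
    for k, x in enumerate(x_eval):
        d = x - nodes
        hit = np.isclose(d, 0.0)
        if hit.any():                                # x equals some node x_i
            out[k] = f_vals[np.argmax(hit)]
        else:
            xi = w / d                               # w_i / (x - x_i)
            out[k] = np.dot(xi, f_vals) / xi.sum()   # formula (1)
    return out

# Example: interpolate f(x) = exp(x) on [0, 1] with n = 5
nodes = np.linspace(0.0, 1.0, 6)
print(barycentric_interpolate(np.exp(nodes), 0.0, 1.0, [0.25, 0.5, 0.75]))
```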

By studying the behavior of matrix Formula (2), we can carry out an analysis that separates out the so-called monomial matrix. This is done by extracting the coefficients of the barycentric functions of the matrix \(\Psi \left( x \right)\). Moreover, the numerator and the denominator have common factors, and these factors annihilate each other. This observation motivates rearranging the terms of each barycentric function in ascending powers of \(x\), or simply expanding each function into a Maclaurin polynomial written as the product of two matrices, one of which is the monomial basis matrix and the other the Maclaurin coefficient matrix. On the basis of this idea, Formula (2) is expressed via four matrices as follows:

$$ \tilde{f}_{n} \left( x \right) = {\rm X}\left( x \right){\text{CWF,}} $$
(4)

where the \(1 \times \left( {n + 1} \right)\) monomial basis row matrix \({\text{X}}\left( x \right)\) and the \(\left( {n + 1} \right) \times \left( {n + 1} \right)\) square Maclaurin coefficient matrix \({\text{C}}\) are defined by

$$ {\rm X}\left( x \right) = \left[ {x^{i} } \right]_{i = 0}^{n} ,\quad {\text{C}}^{T} = \left[ {c_{ij} } \right]_{i,j = 0}^{n} ;\quad c_{ij} = \frac{{\psi_{i}^{\left( j \right)} \left( 0 \right)}}{j!}\quad \forall i,j = \overline{0,n} . $$
(5)
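
A practical way to build the matrices of Formula (4) is sketched below (our own construction, offered as an assumption rather than the authors' code): after the cancellations described above, each \(\psi_{i} \left( x \right)w_{i}\) reduces to the \(i\)-th Lagrange basis polynomial, so the product \({\rm P} = {\text{CW}}\) is exactly the inverse of the Vandermonde matrix of the nodes, which is algebraically equivalent to computing the Maclaurin coefficients (5) directly.

```python
import numpy as np
from math import comb

def advanced_barycentric_matrices(a, b, n):
    """Build the matrices of formula (4): f_n(x) = X(x) C W F.

    X(x) C W equals the row of Lagrange basis polynomials, whose monomial
    coefficients form the inverse Vandermonde matrix of the nodes; this is
    equivalent to (5), since the common factors in psi_i cancel, leaving
    degree-n polynomials.
    """
    nodes = np.linspace(a, b, n + 1)
    W = np.diag([(-1) ** i * comb(n, i) for i in range(n + 1)]).astype(float)
    V = np.vander(nodes, n + 1, increasing=True)   # V[k, j] = x_k**j
    P = np.linalg.inv(V)                           # P = C W
    C = P @ np.linalg.inv(W)                       # recover C itself if needed
    return nodes, C, W, P

# Check: X(x) C W F reproduces f(x) = x**3 exactly for n >= 3
nodes, C, W, P = advanced_barycentric_matrices(0.0, 1.0, 4)
F = nodes ** 3
x = 0.37
X = x ** np.arange(5)                              # monomial basis row X(x)
print(X @ C @ W @ F)                               # ~ 0.37**3
```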

Thus, we have derived a simple matrix formula that raises the computational efficiency of the traditional barycentric Lagrange interpolation (1) and can easily be applied to find the interpolant polynomial of any function \(f\left( x \right)\) defined on the interval \(\left[ {a,b} \right]\). We name the right-hand side of (4) the “advanced barycentric Lagrange single interpolation formula.” Applying Formula (4) to interpolate the data and unknown functions of integral equations remarkably reduces the solution steps through operational matrix abbreviations, considerably reducing round-off errors and saving time. Now, we apply the new Formula (4) to solve weakly singular Fredholm integral equations of the second kind.

Advanced barycentric interpolation formulas for solving weakly singular Fredholm integral equations of the second kind

Here, we present two new techniques for solving weakly singular Fredholm integral equations of the second kind. The method starts by interpolating the unknown and data functions using Formula (4). As for the kernel, we use Formula (4) twice to obtain a double interpolant polynomial through five matrices. We provide two techniques for choosing the distribution of nodes for the two main variables \(x\) and \(t\) of the kernel. In the first technique, the \(x\)-nodes are distributed on the right half of the integration domain, whereas the \(t\)-nodes are distributed on the left half. The step sizes of the two sets of nodes depend on real numbers \(\delta_{1} ,\delta_{2} \ge 0\), which in turn depend on the interpolation degree. In the second technique, we present two different sets of node distributions corresponding to the two variables, all distributed on the entire integration domain. Consider the weakly singular Fredholm integral equation of the second kind

$$ u\left( x \right) = \varphi \left( x \right) + \int\limits_{a}^{b} {k\left( {x,t} \right)u\left( t \right)dt} ;\quad {\text{a}} \le x \le b, $$
(6)

where \(\varphi \left( x \right)\) is a given function, and \(u\left( x \right)\) is the unknown function defined on \({\text{L}}^{2} \left[ {a,b} \right]\). Here, the given kernel \(k\left( {x,t} \right)\) takes the form \(k\left( {x,t} \right) = \frac{1}{{\left| {x - t} \right|^{\alpha } }}\); \(0 < \alpha < 1\). Moreover, \(\mathop {\max }\limits_{{x,t \in \left[ {a,b} \right]}} \left| {k\left( {x,t} \right)} \right| \le N\), \(\mathop {\max }\limits_{{x \in \left[ {a,b} \right]}} \left| {\varphi \left( x \right)} \right| \le M\), and \(\mathop {\max }\limits_{{x \in \left[ {a,b} \right]}} \left| {u\left( x \right)} \right| \le L\), where \(N,M,L\) are assumed to be real numbers.
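
The qualifier “weakly” singular refers to the fact that, although \(k\) blows up on the diagonal \(x = t\), its integral in \(t\) stays finite; a direct computation (standard, added here for clarity) gives

$$ \int\limits_{a}^{b} {\frac{dt}{{\left| {x - t} \right|^{\alpha } }}} = \frac{{\left( {x - a} \right)^{1 - \alpha } + \left( {b - x} \right)^{1 - \alpha } }}{1 - \alpha } < \infty ;\quad 0 < \alpha < 1,\;a \le x \le b. $$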

The first technique

Let \(\tilde{\varphi }_{n} \left( x \right)\) be the single interpolant polynomial that interpolates \(\varphi \left( x \right)\) of (6) on the basis of Formula (4) such that \(\tilde{\varphi }_{n} \left( x \right) \approx \varphi \left( x \right)\) and \(\tilde{\varphi }_{n} \left( {x_{i} } \right) = \varphi \left( {x_{i} } \right)\) for the set of equidistant nodes \(\left\{ {x_{i} } \right\}_{i = 0}^{n} ;x_{i} = a + ih,h = \frac{b - a}{n}\). By using the new Formula (4), \(\varphi \left( x \right)\) can be replaced by its interpolant polynomial \(\tilde{\varphi }_{n} \left( x \right)\) of degree \(n\) in the matrix form

$$ \tilde{\varphi }_{n} \left( x \right) = {\rm X}\left( x \right){\text{CW}}\Phi = {\rm X}\left( x \right){\rm P}\Phi ; \, {\rm P}{\text{ = CW,}} $$
(7)

where \({\rm P}{\text{ = CW}}\) is the \(\left( {n + 1} \right) \times \left( {n + 1} \right)\) square matrix, and \(\Phi\) is the \(\left( {n + 1} \right) \times 1\) column matrix such that

$$ {\rm P} = {\text{CW}},\quad {\rm P}^{T} = \left[ {p_{ij} } \right]_{i,j = 0}^{n} ;\quad p_{ij} = c_{ij} w_{i} ;\quad i,j = \overline{0,n} ,\quad \Phi^{T} = \left[ {\varphi_{i} } \right]_{i = 0}^{n} ;\quad \varphi_{i} = \varphi \left( {x_{i} } \right);\quad i = \overline{0,n} , $$
(8)

and \(c_{ij}\) are calculated by (5). Similarly, the unknown function \(u\left( x \right)\), as well as \(\varphi \left( x \right)\), can be interpolated to obtain its unknown single interpolant polynomial \(\tilde{u}_{n} \left( x \right)\) in the following matrix form:

$$ \tilde{u}_{n} \left( x \right) = {\rm X}\left( x \right){\rm P}{\text{U,}} $$
(9)

where \({\text{U }} = \left[ {u_{i} } \right]_{i = 0}^{n}\) is the \(\left( {n + 1} \right) \times 1\) unknown coefficient column matrix to be determined, where the entries \(\left\{ {u_{i} } \right\}_{i = 0}^{n}\) are the undetermined coefficients of the unknown single interpolant polynomial.

Consequently, for the weakly singular kernel \(k\left( {x,t} \right) = \frac{1}{{\left| {x - t} \right|^{\alpha } }}\), which is singular when \(x \to t\), we interpolate twice; the first interpolation is performed with respect to \(x\), and the second with respect to \(t\), so that we obtain the double interpolant polynomial \(\tilde{k}_{n,n} \left( {x,t} \right)\) of the two variables \(x\) and \(t\). The mathematical properties of the kernel force us to design an innovative technique with the potential to remove this singularity. This goal can only be achieved under the important and necessary condition that \(x > t\). Thus, we adopt an approach based on an appropriate choice of two different sets of nodes; the first set \(\left\{ {\tilde{x}_{i} } \right\}_{i = 0}^{n}\) is distributed on the right-half interval \(\left[ {\frac{a + b}{2},b} \right]\) of the integration domain, and the second set \(\left\{ {\tilde{t}_{i} } \right\}_{i = 0}^{n}\) is distributed on the left-half interval \(\left[ {a,\frac{a + b}{2}} \right]\). This yields two barycentric function summations \(\rho \left( x \right),\tilde{\rho }\left( t \right)\); the first summation \(\rho \left( x \right)\) corresponds to the set of nodes \(\left\{ {\tilde{x}_{i} } \right\}_{i = 0}^{n}\) and the barycentric functions \(\varpi_{i} \left( x \right) = \frac{{\zeta_{i} \left( x \right)}}{\rho \left( x \right)}\) with \(\zeta_{i} \left( x \right) = \frac{1}{{x - \tilde{x}_{i} }}\), whereas the second summation \(\tilde{\rho }\left( t \right)\) corresponds to the set of nodes \(\left\{ {\tilde{t}_{i} } \right\}_{i = 0}^{n}\) and the barycentric functions \(\tilde{\varpi }_{i} \left( t \right) = \frac{{\tilde{\zeta }_{i} \left( t \right)}}{{\tilde{\rho }\left( t \right)}}\) with \(\tilde{\zeta }_{i} \left( t \right) = \frac{1}{{t - \tilde{t}_{i} }}\). We define \(\tilde{x}_{i}\) and \(\tilde{t}_{i}\) as follows:

$$ \tilde{x}_{i} = a + \frac{b - a}{2} + ih_{1} ;\quad h_{1} = \frac{{b - a - 4\delta_{1} }}{2n},\quad \tilde{t}_{i} = a + ih_{2} ;\quad h_{2} = \frac{{b - a - 4\delta_{2} }}{2n};\quad i = \overline{0,n} . $$
(10)

We choose \(\delta_{1} ,\delta_{2} \ge 0\) so that the nodes \(\tilde{x}_{i}\) always remain in the right half of the integration domain and the nodes \(\tilde{t}_{i}\) in the left half. Moreover, we put \(h_{2} = \frac{{b - 3a - 4\delta_{2} }}{n + 0.1}\) for a kernel of the form \(\left| {1 - t} \right|^{ - 1/2}\), that is, when \(x = 1\). The two summations \(\rho \left( x \right),\tilde{\rho }\left( t \right)\) are defined by

$$ \rho \left( x \right) = \sum\limits_{i = 0}^{n} {w_{i} \zeta_{i} \left( x \right)} ,\quad \tilde{\rho }\left( t \right) = \sum\limits_{i = 0}^{n} {w_{i} \tilde{\zeta }_{i} \left( t \right)} ;\quad \zeta_{i} \left( x \right) = \frac{1}{{x - \tilde{x}_{i} }},\quad \tilde{\zeta }_{i} \left( t \right) = \frac{1}{{t - \tilde{t}_{i} }}. $$
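
As a quick illustration (a sketch of our own, not the authors' code), the node sets (10) can be generated and the positivity of \(\tilde{x}_{i} - \tilde{t}_{j}\) checked as follows; taking \(\delta_{2} > 0\) keeps the two sets from meeting at the midpoint.

```python
import numpy as np

def first_technique_nodes(a, b, n, delta1=0.0, delta2=0.0):
    """Node sets (10): x-tilde on the right half, t-tilde on the left half."""
    i = np.arange(n + 1)
    x_nodes = a + (b - a) / 2.0 + i * (b - a - 4.0 * delta1) / (2.0 * n)
    t_nodes = a + i * (b - a - 4.0 * delta2) / (2.0 * n)
    return x_nodes, t_nodes

# With delta2 > 0 the sets do not meet at the midpoint, so every difference
# x_i - t_j is strictly positive and |x - t|**(-alpha) stays real and finite.
x_nodes, t_nodes = first_technique_nodes(0.0, 1.0, 5, delta2=0.05)
print((x_nodes[:, None] - t_nodes[None, :]).min() > 0.0)   # True
```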

By using the same strategy used to derive Formula (4), the kernel \(k\left( {x,t} \right)\) can be interpolated using the set of nodes \(\left\{ {\tilde{x}_{i} } \right\}_{i = 0}^{n}\) via four matrices as follows:

$$ \tilde{k}_{n,n} \left( {x,t} \right) = {\rm X}\left( x \right){\text{CW}}{\rm K}\left( {\tilde{x}_{i} ,t} \right), $$
(11)

where \({\rm K}\left( {\tilde{x}_{i} ,t} \right)\) is the column matrix such that

$$ {\rm K}^{T} \left( {\tilde{x}_{i} ,t} \right) = \left[ {\begin{array}{*{20}c} {k\left( {\tilde{x}_{0} ,t} \right)} & {k\left( {\tilde{x}_{1} ,t} \right)} & {k\left( {\tilde{x}_{2} ,t} \right)} & {...} & {k\left( {\tilde{x}_{n} ,t} \right)} \\ \end{array} } \right]. $$
(12)

In the same context, we again interpolate each function \(k\left( {\tilde{x}_{i} ,t} \right)\), \(i = \overline{0,n}\), by using the set of nodes \(\left\{ {\tilde{t}_{j} } \right\}_{j = 0}^{n}\). After lengthy substitutions and simplifications, performed using matrix operations, we obtain the kernel through five matrices, two of which are monomial basis function matrices: the row monomial basis matrix \({\rm X}\left( x \right)\) in \(x\) and the column monomial basis matrix \({\rm X}^{T} \left( t \right)\) in \(t\). Thus, we obtain the advanced barycentric double interpolant polynomial \(\tilde{k}_{n,n} \left( {x,t} \right)\) via five matrices as follows:

$$ \tilde{k}_{n,n} \left( {x,t} \right) = {\rm X}\left( x \right){\text{AKB}}{\rm X}^{T} \left( t \right), $$
(13)

where the \(\left( {n + 1} \right) \times \left( {n + 1} \right)\) square matrix \({\text{K}}\) is calculated as follows:

$$ {\text{K}} = \left[ {w_{ij} k_{ij} } \right]_{i,j = 0}^{n} ;\quad k_{ij} = k\left( {\tilde{x}_{i} ,\tilde{t}_{j} } \right);\quad w_{ij} = w_{i} w_{j} ;\quad i,j = \overline{0,n} . $$
(14)

Here, \({\text{A}}^{T} = \left[ {a_{ij} } \right]_{i,j = 0}^{n}\) and \({\text{B}}^{T} = \left[ {b_{ij} } \right]_{i,j = 0}^{n}\) are \(\left( {n + 1} \right) \times \left( {n + 1} \right)\) square matrices whose entries \(a_{ij}\) and \(b_{ij}\) can be calculated by

$$ a_{ij} = \frac{{\varpi_{i}^{\left( j \right)} \left( 0 \right)}}{j!}{, }b_{ij} = \frac{{\tilde{\varpi }_{i}^{\left( j \right)} \left( 0 \right)}}{j!} \, \forall i,j = \overline{0,n} . $$
(15)

Moreover, substituting \(\tilde{k}_{n,n} \left( {x,t} \right)\) given by (13) and \(\tilde{u}_{n} \left( t \right)\) given by (9) into the right side of (6), we obtain \(\tilde{u}_{n} \left( x \right)\) in the following matrix form:

$$ \tilde{u}_{n} \left( x \right) = \varphi \left( x \right){ + }\int\limits_{a}^{b} {{\text{X}}\left( x \right){\rm N}} \tilde{\rm X}\left( t \right){\rm P}{\text{U}}dt, $$
(16)

where the \(\left( {n + 1} \right) \times \left( {n + 1} \right)\) square matrix \({\rm N}\) and the \(\left( {n + 1} \right) \times \left( {n + 1} \right)\) square matrix \(\tilde{\rm X}\left( t \right)\) are defined by

$$ {\rm N} = {\text{AKB}},\tilde{\rm X}\left( t \right) = {\rm X}^{T} \left( t \right){\rm X}\left( t \right) = \left[ {t^{i + j} } \right]_{i,j = 0}^{n} . $$
(17)

By integrating the right side of (16), we obtain

$$ \tilde{u}_{n} \left( x \right) = \varphi \left( x \right){ + }{\rm X}\left( x \right){\rm N}{\rm H}{\rm P}{\text{U}}{.} $$
(18)

Here, the \(\left( {n + 1} \right) \times \left( {n + 1} \right)\) square matrix \({\rm H}\) is given by

$$ {\rm H} = \int\limits_{a}^{b} {\tilde{\rm X}\left( t \right)dt} = \left[ {h_{ij} } \right]_{i,j = 0}^{n}; \quad h_{ij} = \int\limits_{a}^{b} {t^{i + j} dt} = \left. {\frac{{t^{i + j + 1} }}{i + j + 1}} \right|_{a}^{b} = \frac{{b^{i + j + 1} - a^{i + j + 1} }}{i + j + 1}; \quad i,j = \overline{0,n} . $$
(19)

Furthermore, by replacing \(u\left( x \right)\) on the left side of (6) with \(\tilde{u}_{n} \left( x \right)\) defined by (9), and replacing \(k\left( {x,t} \right)u\left( t \right)\) on the right side with \(\tilde{k}_{n,n} \left( {x,t} \right)\tilde{u}_{n} \left( t \right)\), we obtain

$$ {\rm X}\left( x \right){\rm N}{\rm H}{\rm P}{\text{U}} - {\rm X}\left( x \right){\rm N}{\rm H}{\rm N}{\rm H}{\rm P}{\text{U}}={\rm X}\left( x \right){\rm N}{\rm H}{\rm P}\Phi . $$
(20)

Simplifying (20) yields the linear algebraic system

$$ \left( {{\text{I}} - {\rm N}{\rm H}} \right){\rm P}{\text{U }}={\rm P}\Phi . $$
(21)

By applying any direct method, we can solve system (21) to obtain the unknown coefficient column matrix \({\text{U}}\):

$$ {\text{U }} = {\rm P}^{ - 1} {\rm M}^{ - 1} {\rm P}\Phi ; \quad {\rm M} = \left( {{\text{I}} - {\rm N}{\rm H}} \right). $$
(22)

Accordingly, the interpolant solution that was given by (9) then takes the simple matrix form

$$ \tilde{u}_{n} \left( x \right) = {\rm X}\left( x \right){\rm P}{\rm P}^{ - 1} {\rm M}^{ - 1} {\rm P}\Phi = {\text{X}}\left( x \right)\Omega , $$
(23)

where \(\Omega\) is the \(\left( {n + 1} \right) \times 1\) column matrix

$$ \Omega = {\rm M}^{ - 1} {\rm P}\Phi = \left[ {\gamma_{i} } \right]_{i = 0}^{n} . $$
(24)

The entries \(\left\{ {\gamma_{i} } \right\}_{i = 0}^{n}\) of \(\Omega\) can be easily calculated from the product of the three known matrices \({\rm M}^{ - 1} {\rm P}\Phi\). Hence, the interpolant polynomial solution of the considered integral Eq. (6) is given by

$$ \tilde{u}_{n} \left( x \right) = \sum\limits_{i = 0}^{n} {\gamma_{i} x^{i} } ;\quad a \le x \le b. $$
(25)
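
Putting Eqs. (7)–(25) together, the following sketch assembles and solves system (21). It is our own reconstruction, not the published MATLAB code: the products \({\rm P} = {\text{CW}}\) and \({\rm N} = {\text{AKB}}\) are realized through inverse Vandermonde matrices of the respective node sets, which is algebraically equivalent to the Maclaurin-coefficient construction for small \(n\).

```python
import numpy as np

def solve_first_technique(phi, kernel, a, b, n, delta1=0.0, delta2=0.0):
    """Assemble system (21) and return the coefficient vector Omega of (24),
    so that u_n(x) = sum_i Omega[i] * x**i is the interpolant solution (25).

    phi    : vectorized data function phi(x) of Eq. (6).
    kernel : vectorized kernel k(x, t); it is evaluated only at node pairs
             (x-node, t-node).  For a kernel genuinely singular at x = t,
             take delta2 > 0 so the two node sets do not meet at the midpoint.
    """
    i = np.arange(n + 1)
    # Equidistant nodes for phi and u; P = CW realized as an inverse
    # Vandermonde matrix (X(x) P is the row of Lagrange basis polynomials).
    x_eq = np.linspace(a, b, n + 1)
    P = np.linalg.inv(np.vander(x_eq, n + 1, increasing=True))
    # Split node sets (10): x-tilde on the right half, t-tilde on the left.
    xt = a + (b - a) / 2.0 + i * (b - a - 4.0 * delta1) / (2.0 * n)
    tt = a + i * (b - a - 4.0 * delta2) / (2.0 * n)
    # N = AKB of Eq. (17), realized through inverse Vandermonde matrices.
    Khat = kernel(xt[:, None], tt[None, :])
    N = (np.linalg.inv(np.vander(xt, n + 1, increasing=True)) @ Khat
         @ np.linalg.inv(np.vander(tt, n + 1, increasing=True)).T)
    # Moment matrix H of Eq. (19).
    p = i[:, None] + i[None, :] + 1
    H = (b ** p - a ** p) / p
    # System (21): (I - N H) P U = P Phi; Omega = M^{-1} P Phi, Eq. (24).
    M = np.eye(n + 1) - N @ H
    return np.linalg.solve(M, P @ phi(x_eq))
```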

The second technique

We choose two sets of nodes \(\left\{ {\tilde{x}_{i} } \right\}_{i = 0}^{n}\) and \(\left\{ {\tilde{t}_{j} } \right\}_{j = 0}^{n}\), each consisting of \(\left( {n + 1} \right)\) equally spaced distinct nodes corresponding to the variables \(x\) and \(t\), respectively. These sets of nodes are distributed over the whole domain \(\left[ {a,b} \right]\) and never fall outside it. The two sets depend on step sizes \(h_{1} ,h_{2}\), which in turn depend on nonnegative numbers \(\delta_{1} \ge 0,\;\delta_{2} \ge 0\); we define

$$ h_{1} = \frac{{\left( {b - \delta_{1} } \right) - \left( {a + \delta_{1} } \right)}}{n}, \,h_{2} = \frac{{\left( {b - \delta_{2} } \right) - \left( {a + \delta_{2} } \right)}}{n}, $$
(26)

and

$$ x_{i} = \left( {a + \delta_{1} } \right) + ih_{1},\quad t_{j} = \left( {a + \delta_{2} } \right) + jh_{2} ; \quad i,j = \overline{0,n} . $$
(27)

Based on the modified matrix forms (2)–(5), we obtain \(\tilde{u}_{n} \left( x \right)\) and \(\tilde{f}_{n} \left( x \right)\) (here \(f\left( x \right)\) denotes the data function \(\varphi \left( x \right)\) of (6)) in the form

$$ \tilde{u}_{n} \left( x \right) = \Psi \left( x \right){\text{WU}}, \quad \tilde{f}_{n} \left( x \right) = \Psi \left( x \right){\text{WF}}{.} $$
(28)

The kernel \(k\left( {x,t} \right)\) is now interpolated twice; the first interpolation is performed with respect to the argument \(x\), and the second with respect to the argument \(t\), with the matrix orders reversed. Thus, we obtain the modified barycentric double interpolant kernel \(\tilde{k}_{n,n} \left( {x,t} \right)\) in the form

$$ \tilde{k}_{n,n} \left( {x,t} \right) = \Psi \left( x \right){\rm K}{\rm N}^{T} \left( t \right). $$
(29)

Here, \({\rm N}^{T} \left( t \right) = \left[ {n_{j} \left( t \right)} \right]_{j = 0}^{n}\) is the \(\left( {n + 1} \right) \times 1\) column matrix of the barycentric functions \(n_{j} \left( t \right)\), where

$$ n_{j} \left( t \right) = \frac{{\xi_{j} \left( t \right)}}{\varphi \left( t \right)},\quad \xi_{j} \left( t \right) = \frac{1}{{t - t_{j} }},\quad \varphi \left( t \right) = \sum\limits_{j = 0}^{n} {w_{j} \,\xi_{j} \left( t \right)} ;\quad w_{j} = \left( { - 1} \right)^{j} \binom{n}{j}, $$
(30)

and the known square matrix \({\rm K}\) is given by

$$ {\rm K} = \left[ {k_{ij} } \right]_{i,j = 0}^{n} ; \quad k_{ij} = w_{ij} \, k\left( {x_{i} ,t_{j} } \right);\quad w_{ij} = w_{i} \times w_{j} . $$
(31)

By virtue of Eqs. (28) and (29), the product of the double interpolant kernel \(\tilde{k}_{n,n} \left( {x,t} \right)\) and the single interpolant unknown function \(\tilde{u}_{n} \left( t \right)\) can be written in the following matrix form:

$$ \tilde{k}_{n,n} \left( {x,t} \right)\tilde{u}_{n} \left( t \right) = \Psi \left( x \right){\rm K}{\rm N}^{T} \left( t \right)\Psi \left( t \right){\text{WU}} = \Psi \left( x \right){\rm K}\tilde{\Psi }\left( t \right){\text{WU}};\quad \tilde{\Psi }\left( t \right) = {\rm N}^{T} \left( t \right)\Psi \left( t \right). $$
(32)

Now, replacing \(k\left( {x,t} \right)u\left( t \right)\) on the right side of (6) with \(\tilde{k}_{n,n} \left( {x,t} \right)\tilde{u}_{n} \left( t \right)\) given by (32), we obtain \(\tilde{u}_{n} \left( x \right)\) in the form

$$ \tilde{u}_{n} \left( x \right) = f\left( x \right) + \Psi \left( x \right){\rm K}\Phi {\text{WU; }}\Phi = \int\limits_{a}^{b} {\tilde{\Psi }\left( t \right)dt} . $$
(33)

Moreover, by substituting the single interpolant \(\tilde{u}_{n} \left( x \right)\) of (28) into both sides of (6), replacing \(k\left( {x,t} \right)\) with the double interpolant kernel \(\tilde{k}_{n,n} \left( {x,t} \right)\) of (29), and replacing \(f\left( t \right)\) with \(\tilde{f}_{n} \left( t \right)\) of (28), we find that

$$ \Psi \left( x \right){\rm K}\Phi {\text{WU}} - \int\limits_{a}^{b} {\Psi \left( x \right){\rm K}\tilde{\Psi }\left( t \right){\rm K}\Phi {\text{WU}}dt} = \int\limits_{a}^{b} {\Psi \left( x \right){\rm K}\tilde{\Psi }\left( t \right){\text{WF}}dt} . $$
(34)

Simplifying Eq. (34) yields

$$ \Psi \left( x \right){\rm K}\Phi {\text{WU}} - \Psi \left( x \right){\rm K}\Phi {\rm K}\Phi {\text{WU}} = \Psi \left( x \right){\rm K}\Phi {\text{WF}}{.} $$
(35)

From this equation, we can find the required unknown coefficient matrix \({\text{U}} = \left( {{\text{W}} - {\rm K}\Phi {\text{W}}} \right)^{ - 1} {\text{WF}}\); by substituting into (28), we obtain the matrix–vector single interpolant \(\tilde{u}_{n} \left( x \right)\):

$$ \tilde{u}_{n} \left( x \right) = \Psi \left( x \right){\text{W}}\left( {{\text{W}} - {\rm K}\Phi {\text{W}}} \right)^{ - 1} {\text{WF}}{.} $$
(36)
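
The following sketch (again our own reconstruction under stated assumptions, not the published code) assembles Eq. (36). The entries of \(\Phi = \int_a^b {\tilde{\Psi }\left( t \right)dt}\) are integrals of products of two barycentric basis functions; since each product reduces to a polynomial of degree at most \(2n\), an \((n+1)\)-point Gauss–Legendre rule integrates them exactly.

```python
import numpy as np
from math import comb

def lagrange_basis(nodes, t):
    """Row [l_0(t), ..., l_n(t)] of Lagrange basis polynomials at scalar t."""
    idx = np.arange(len(nodes))
    L = np.empty(len(nodes))
    for i in idx:
        m = idx[idx != i]
        L[i] = np.prod((t - nodes[m]) / (nodes[i] - nodes[m]))
    return L

def solve_second_technique(f, kernel, a, b, n, delta1=0.0, delta2=0.0):
    """Sketch of Eqs. (26)-(36); returns the interpolant solution as a callable.

    For a kernel singular on x = t, choose delta1 != delta2 so that the two
    node grids (27) share no common point.
    """
    i = np.arange(n + 1)
    w = np.array([(-1) ** k * comb(n, k) for k in range(n + 1)], dtype=float)
    x_nodes = (a + delta1) + i * ((b - delta1) - (a + delta1)) / n   # (27)
    t_nodes = (a + delta2) + i * ((b - delta2) - (a + delta2)) / n
    W = np.diag(w)
    K = np.outer(w, w) * kernel(x_nodes[:, None], t_nodes[None, :])  # (31)
    # Phi_{ij} = int_a^b n_i(t) psi_j(t) dt: the integrands are polynomials
    # of degree <= 2n, so (n+1)-point Gauss-Legendre integrates them exactly.
    g, gw = np.polynomial.legendre.leggauss(n + 1)
    Phi = np.zeros((n + 1, n + 1))
    for gp, wq in zip(g, gw):
        t = 0.5 * (b - a) * gp + 0.5 * (a + b)
        nt = lagrange_basis(t_nodes, t) / w      # n_i(t) = l_i(t) / w_i
        ps = lagrange_basis(x_nodes, t) / w      # psi_j(t) = l_j(t) / w_j
        Phi += 0.5 * (b - a) * wq * np.outer(nt, ps)
    U = np.linalg.solve(W - K @ Phi @ W, W @ f(x_nodes))             # (36)
    return lambda x: lagrange_basis(x_nodes, x) @ U   # Psi(x) W U
```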

Convergence and error analysis

In this section, we study the convergence in the mean [33, 34] of the interpolant unknown function \(\tilde{u}_{n} \left( x \right)\) of the first technique to the exact solution \(u\left( x \right)\).

Theorem 4.1

Assume \(u\left( x \right) \in {\text{L}}^{2} \left[ {a,b} \right]\) is a sufficiently smooth exact solution of (6) such that \(\mathop {\max }\limits_{{x \in \left[ {a,b} \right]}} \left| {u\left( x \right)} \right| \le \varepsilon\). Then,

$$ \mathop {\lim }\limits_{n \to \infty } \left\| {u\left( x \right) - \tilde{u}_{n} \left( x \right)} \right\|_{2} = 0. $$
(37)

Proof

Let the exact solution \(u\left( x \right)\) be expanded into a Maclaurin series

$$ u\left( x \right) = \sum\limits_{s = 0}^{\infty } {\alpha_{s} x^{s} } . $$
(38)

Then, we have

$$ \left\| {u\left( x \right) - \tilde{u}_{n} \left( x \right)} \right\|_{2}^{2} = \int\limits_{a}^{b} {\left| {u\left( x \right) - \tilde{u}_{n} \left( x \right)} \right|^{2} } dx = \int\limits_{a}^{b} {\left| {u\left( x \right)} \right|^{2} } dx + \int\limits_{a}^{b} {\left| {\tilde{u}_{n} \left( x \right)} \right|^{2} } dx - 2\int\limits_{a}^{b} {u\left( x \right)\tilde{u}_{n} \left( x \right)} dx. $$
(39)

Here, we find from Eq. (25) that

$$ \begin{aligned} \int\limits_{a}^{b} {\left| {\tilde{u}_{n} \left( x \right)} \right|^{2} } dx & = \int\limits_{a}^{b} {\left| {\sum\limits_{i = 0}^{n} {\gamma_{i} x^{i} } \sum\limits_{j = 0}^{n} {\gamma_{j} x^{j} } } \right|} dx = \sum\limits_{i = 0}^{n} {\sum\limits_{j = 0}^{n} {\gamma_{i} \gamma_{j} } } \int\limits_{a}^{b} {x^{i + j} dx} \hfill \\ & = \sum\limits_{i = 0}^{n} {\sum\limits_{j = 0}^{n} {\frac{{\gamma_{i} \gamma_{j} }}{i + j + 1}} } \left( {b^{i + j + 1} - a^{i + j + 1} } \right). \hfill \\ \end{aligned} $$
(40)

From (38), we obtain

$$ \begin{aligned} \int\limits_{a}^{b} {\left| {u\left( x \right)} \right|^{2} } dx &= \int\limits_{a}^{b} {\left| {\sum\limits_{s = 0}^{\infty } {\alpha_{s} x^{s} } \sum\limits_{k = 0}^{\infty } {\alpha_{k} x^{k} } } \right|} dx = \sum\limits_{s = 0}^{\infty } {\sum\limits_{k = 0}^{\infty } {\alpha_{s} \alpha_{k} } } \int\limits_{a}^{b} {x^{s + k} dx} \hfill \\ & = \sum\limits_{s = 0}^{\infty } {\sum\limits_{k = 0}^{\infty } {\frac{{\alpha_{s} \alpha_{k} }}{s + k + 1}} } \left( {b^{s + k + 1} - a^{s + k + 1} } \right). \hfill \\ \end{aligned} $$
(41)

From the Schwarz inequality, we have

$$ \begin{aligned} \int\limits_{a}^{b} {\left| {\tilde{u}_{n} \left( x \right)u\left( x \right)} \right|} dx & \le \left[ {\int\limits_{a}^{b} {\left| {\tilde{u}_{n} \left( x \right)} \right|^{2} } dx} \right]^{\frac{1}{2}} \times \left[ {\int\limits_{a}^{b} {\left| {u\left( x \right)} \right|^{2} } dx} \right]^{\frac{1}{2}} \hfill \\ & = \left[ {\sum\limits_{i = 0}^{n} {\sum\limits_{j = 0}^{n} {\frac{{\gamma_{i} \gamma_{j} }}{i + j + 1}} } \left( {b^{i + j + 1} - a^{i + j + 1} } \right)} \right]^{\frac{1}{2}} \times \left[ {\sum\limits_{s = 0}^{\infty } {\sum\limits_{k = 0}^{\infty } {\frac{{\alpha_{s} \alpha_{k} }}{s + k + 1}} } \left( {b^{s + k + 1} - a^{s + k + 1} } \right)} \right]^{\frac{1}{2}} . \hfill \\ \end{aligned} $$
(42)

By substituting Eqs. (40), (41), and (42) into Eq. (39), with \(s,k \to \infty\), we have proven that

$$ \begin{aligned} \mathop {\lim }\limits_{n \to \infty } \left\| {u\left( x \right) - \tilde{u}_{n} \left( x \right)} \right\|_{2}^{2} & = \sum\limits_{s = 0}^{\infty } {\sum\limits_{k = 0}^{\infty } {\frac{{\alpha_{s} \alpha_{k} }}{s + k + 1}} } \left( {b^{s + k + 1} - a^{s + k + 1} } \right) + \mathop {\lim }\limits_{n \to \infty } \sum\limits_{i = 0}^{n} {\sum\limits_{j = 0}^{n} {\frac{{\gamma_{i} \gamma_{j} }}{i + j + 1}} } \left( {b^{i + j + 1} - a^{i + j + 1} } \right) \hfill \\ & \quad - 2\mathop {\lim }\limits_{n \to \infty } \left[ {\sum\limits_{i = 0}^{n} {\sum\limits_{j = 0}^{n} {\frac{{\gamma_{i} \gamma_{j} }}{i + j + 1}} } \left( {b^{i + j + 1} - a^{i + j + 1} } \right)} \right]^{\frac{1}{2}} \left[ {\sum\limits_{s = 0}^{\infty } {\sum\limits_{k = 0}^{\infty } {\frac{{\alpha_{s} \alpha_{k} }}{s + k + 1}} } \left( {b^{s + k + 1} - a^{s + k + 1} } \right)} \right]^{\frac{1}{2}} = 0. \hfill \\ \end{aligned} $$
(43)

The goal now is to estimate the interpolation error. We denote the total error of the approximation by \(\varepsilon_{n} \left( x \right)\). Thus, \(\varepsilon_{n} \left( x \right) = \left\| {u\left( x \right) - \tilde{u}_{n} \left( x \right)} \right\|_{2}\), where \(\left\| \, \cdot \, \right\|_{2}\) denotes the \({\text{L}}^{2} \left[ {a,b} \right]\) norm.□

Theorem 4.2

Let \({\mathbb{L}}:{\mathbb{X}} \to {\mathbb{X}}\) be a compact bounded linear operator with a weakly singular kernel, defined on the Banach space \({\mathbb{X}} = {\text{L}}^{2} \left[ {a,b} \right]\), such that

$$ {\mathbb{L}}\left( {u\left( t \right)} \right) = \int\limits_{a}^{b} {\frac{u\left( t \right)}{{\left| {x - t} \right|^{\alpha } }}dt} ; \, 0 < \alpha < 1,a \le x \le b, $$
(44)

and

$$ {\mathbb{L}}\left( {\tilde{u}_{n} \left( t \right)} \right) = \int\limits_{a}^{b} {\frac{{\tilde{u}_{n} \left( t \right)}}{{\left| {x - t} \right|^{\alpha } }}dt} ; \, 0 < \alpha < 1,a \le x \le b. $$
(45)

Then,

$$ \varepsilon_{n} = \left\| {u\left( x \right) - \tilde{u}_{n} \left( x \right)} \right\| = 0. $$
(46)

Proof

Substituting Eqs. (44) and (45) into Eq. (6), we get

$$ u\left( x \right) = \varphi \left( x \right) + {\mathbb{L}}\left( {u\left( t \right)} \right),\tilde{u}_{n} \left( x \right) = \varphi \left( x \right) + {\mathbb{L}}\left( {\tilde{u}_{n} \left( t \right)} \right). $$
(47)

Thus, we have

$$ \begin{aligned} \varepsilon _{n} & = \left\| {u\left( x \right) - \tilde{u}_{n} \left( x \right)} \right\| = \left\| {{\mathbb{L}}\left( {u\left( t \right)} \right) - {\mathbb{L}}\left( {\tilde{u}_{n} \left( t \right)} \right)} \right\| = \left\| {\int\limits_{a}^{b} {\frac{1}{{\left| {x - t} \right|^{\alpha } }}u\left( t \right)dt} - \int\limits_{a}^{b} {\frac{1}{{\left| {x - t} \right|^{\alpha } }}\tilde{u}_{n} \left( t \right)dt} } \right\| \\ & = \left[ {\int\limits_{a}^{b} {\left| {\int\limits_{a}^{b} {\frac{1}{{\left| {x - t} \right|^{\alpha } }}u\left( t \right)dt} - \int\limits_{a}^{b} {\frac{1}{{\left| {x - t} \right|^{\alpha } }}\tilde{u}_{n} \left( t \right)dt} } \right|^{2} dx} } \right]^{{\frac{1}{2}}} = \left[ {\int\limits_{a}^{b} {\left| {\int\limits_{a}^{b} {\frac{{u\left( t \right)}}{{\left| {x - t} \right|^{\alpha } }}dt} - \sum\limits_{{i = 0}}^{n} {\tilde{\gamma }_{i} x^{i} } } \right|^{2} dx} } \right]^{{\frac{1}{2}}} . \\ \end{aligned} $$
(48)

Here, we have

$$ \int\limits_{a}^{b} {\frac{1}{{\left| {x - t} \right|^{\alpha } }}\tilde{u}_{n} \left( t \right)dt} = \int\limits_{a}^{b} {{\text{X}}\left( x \right){\text{AKBX}}^{T} \left( t \right){\text{X}}\left( t \right)\Omega dt} = {\text{X}}\left( x \right){\text{AKBH}}\Omega = {\text{X}}\left( x \right)\tilde{\Omega }, $$
(49)

where

$$ \tilde{\Omega } = {\text{AKBH}}\Omega { = }\left[ {\tilde{\gamma }_{i} } \right]_{i = 0}^{n} , $$
(50)

and

$$ \int\limits_{a}^{b} {\frac{1}{{\left| {x - t} \right|^{\alpha } }}u\left( t \right)dt} = \int\limits_{a}^{b} {{\text{X}}\left( x \right){\text{FX}}^{T} \left( t \right){{\tilde{\tilde{X}}}}\left( t \right)\tilde{\tilde{\Delta }}dt} = \int\limits_{a}^{b} {{\text{X}}\left( x \right){\text{FZ}}\left( t \right)\tilde{\tilde{\Delta }}dt} = {\text{X}}\left( x \right){{F\tilde{Z}}}\tilde{\tilde{\Delta }} = {\text{X}}\left( x \right){\text{E}}, $$
(51)
$$ {\text{F}} = {\text{AKB}} = \left[ {f_{ij} } \right]_{i,j = 0}^{n} ,\quad {\text{E}} = {\text{F}}{\tilde{\text{Z}}}\tilde{\tilde{\Delta }},\quad {\text{Z}}\left( t \right) = {\text{X}}^{T} \left( t \right){{\tilde{\tilde{\text{X}}}}}\left( t \right);\quad {{\tilde{\tilde{\text{X}}}}}\left( t \right) = \mathop {\lim }\limits_{m \to \infty } \left[ {t^{q} } \right]_{q = 0}^{m} ,\quad \tilde{\tilde{\Delta }} = \mathop {\lim }\limits_{m \to \infty } \left[ {\alpha_{q} } \right]_{q = 0}^{m} , $$
(52)

and

$$ {\tilde{\text{Z}}} = \int\limits_{a}^{b} {{\text{Z}}\left( t \right)dt} = \mathop {\lim }\limits_{m \to \infty } \left[ {z_{iq} } \right]_{i,q = 0}^{n,m} ;z_{iq} = \int\limits_{a}^{b} {t^{i + q} dt} = \frac{{b^{i + q + 1} - a^{i + q + 1} }}{i + q + 1};i = \overline{0,n} , \quad q = 0,1,2,.... $$
(53)

Let \({\rm E} - \tilde{\Omega }\) be a column matrix with entries denoted by \(\lambda_{i}\). Then, we have proven that

$$ \begin{aligned} \varepsilon_{n} & = \left\| {u\left( x \right) - \tilde{u}_{n} \left( x \right)} \right\| = \left[ {\int\limits_{a}^{b} {\left| {\int\limits_{a}^{b} {\frac{1}{{\left| {x - t} \right|^{\alpha } }}u\left( t \right)dt} - \int\limits_{a}^{b} {\frac{1}{{\left| {x - t} \right|^{\alpha } }}\tilde{u}_{n} \left( t \right)dt} } \right|^{2} dx} } \right]^{\frac{1}{2}} \\ & = \left[ {\int\limits_{a}^{b} {\left| {{\rm X}\left( x \right)\left( {{\rm E} - \tilde{\Omega }} \right)} \right|^{2} dx} } \right]^{\frac{1}{2}} = \mathop {\lim }\limits_{m \to \infty } \left[ {\sum\limits_{i = 0}^{n} {\sum\limits_{q = 0}^{m} {\frac{{\lambda_{i} \lambda_{q} }}{i + q + 1}} } \left( {b^{i + q + 1} - a^{i + q + 1} } \right)} \right]^{\frac{1}{2}} = 0. \\ \end{aligned} $$
(54)

Computational results and discussions

Based on the two presented techniques, we designed two MATLAB R2019b codes and applied them to six examples, comparing the obtained results with the exact solutions. The interpolant solutions of examples 1–4, which involve weakly singular equations, converge strongly to the exact ones, and do so faster than the methods mentioned in [20, 21]; moreover, the interpolants are obtained easily and uniformly, in contrast with the complicated results reported in [20, 21], as the given tables and figures show. The interpolant solutions of examples 5–6, which involve nonsingular equations, are found to be equal to the exact solutions. We denote the exact solution by \(u_{ex} \left( x \right)\), and the interpolant solutions obtained by the first and second techniques by \(\tilde{u}_{n}^{1} \left( x \right)\) and \(\tilde{u}_{n}^{2} \left( x \right)\), respectively, where \(n\) denotes the interpolant degree.

Example 1

Consider the integral equation,

$$ u\left( x \right) = x^{2} - \frac{16}{{15}} + \int\limits_{0}^{1} {\frac{u\left( t \right)}{{\sqrt {1 - t} }}} dt; \, 0 \le x \le 1, $$
(55)

whose exact solution is given by \(u_{ex} \left( x \right) = x^{2}\) [20]. Using the first technique for \(\delta_{1} = \delta_{2} = 0\) with \(x = 1\) in the kernel, we obtained uniformly interpolated polynomials \(\tilde{u}_{n}^{1} \left( x \right)\) for \(n = 2,5\). The CPU time was 8.294 s for \(n = 2\) and 12.893 s for \(n = 5\). We evaluated the exact solution values \(u_{ex} \left( {x_{i} } \right)\) and \(\tilde{u}_{n}^{1} \left( {x_{i} } \right)\) for \(n = 2,5\) at the set of nodes \(x_{i} = 0:0.1:1.0\) and then estimated the absolute errors \(R_{n}^{1} \left( {x_{i} } \right) = \left| { u_{ex} \left( {x_{i} } \right) - \tilde{u}_{n}^{1} \left( {x_{i} } \right)} \right|\) for \(n = 2,5\), as shown in Table 1; Fig. 1 plots these errors. Using the second technique for \(n = 4,6\) and \(\delta_{1} = 0\), \(\delta_{2} = 1/150\), we obtained the uniformly interpolated polynomials \(\tilde{u}_{4}^{2} \left( x \right),\tilde{u}_{6}^{2} \left( x \right)\). The total CPU time was 12.356 s for \(n = 4\) and 18.000 s for \(n = 6\). We evaluated \(u_{ex} \left( {x_{i} } \right)\) and \(\tilde{u}_{n}^{2} \left( {x_{i} } \right)\) for \(n = 4,6\) at the same nodes and estimated the absolute errors \(R_{n}^{2} \left( {x_{i} } \right) = \left| { u_{ex} \left( {x_{i} } \right) - \tilde{u}_{n}^{2} \left( {x_{i} } \right)} \right|\) for \(n = 4,6\), as shown in Table 2; Fig. 2 plots these errors.
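
For illustration, the first-technique sketch given earlier could be driven on this example as follows (hypothetical usage of the `solve_first_technique` helper defined above, not the authors' code):

```python
import numpy as np

# Example 1, Eq. (55): the kernel depends on t only, so no singular
# evaluation occurs for delta1 = delta2 = 0 (the t-nodes stay in [0, 0.5]).
Omega = solve_first_technique(
    phi=lambda x: x ** 2 - 16.0 / 15.0,
    kernel=lambda x, t: 1.0 / np.sqrt(1.0 - t + 0.0 * x),   # broadcasts in x
    a=0.0, b=1.0, n=2)

xs = np.linspace(0.0, 1.0, 11)                  # x_i = 0:0.1:1.0
u_tilde = np.polyval(Omega[::-1], xs)           # Eq. (25), ascending coeffs
print(np.abs(xs ** 2 - u_tilde))                # absolute errors R_n^1(x_i)
```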

Table 1 The absolute errors \(R_{n}^{1} \left( {x_{i} } \right)\) for \(n = 2,5\)
Fig. 1 The First Technique

Table 2 The absolute errors \(R_{n}^{2} \left( {x_{i} } \right)\) for \(n = 4,6\)
Fig. 2 The Second Technique

Example 2

Consider the integral equation,

$$ u\left( x \right) = \sqrt x - \frac{\pi }{2} + \int\limits_{0}^{1} {\frac{u\left( t \right)}{{\sqrt {1 - t} }}} dt; \, 0 \le x \le 1, $$
(56)

whose exact solution is given by \(u_{ex} \left( x \right) = \sqrt x\) [20]. Using the first technique for \(n = 2,3\) and \(\delta_{1} = \delta_{2} = 0\), with \(x = 1\) in the kernel, we obtained the uniformly interpolated polynomials \(\tilde{u}_{2}^{1} \left( x \right),\tilde{u}_{3}^{1} \left( x \right)\). The total CPU time was 8.282 s for \(n = 2\) and 9.013 s for \(n = 3\). We evaluated the exact solution values \(u_{ex} \left( {x_{i} } \right)\) and the interpolated polynomials \(\tilde{u}_{2}^{1} \left( {x_{i} } \right), \, \tilde{u}_{3}^{1} \left( {x_{i} } \right)\) at the set of nodes \(x_{i} = 0:0.1:1.0\) and estimated the absolute errors \(R_{n}^{1} \left( {x_{i} } \right) = \left| { u_{ex} \left( {x_{i} } \right) - \tilde{u}_{n}^{1} \left( {x_{i} } \right)} \right|\) for \(n = 2,3\), as shown in Table 3; Fig. 3 plots these errors. Using the second technique for \(n = 8,12\) and \(\delta_{1} = 0,\delta_{2} = 1/300\), we obtained the uniformly interpolated polynomials \(\tilde{u}_{8}^{2} \left( x \right), \, \tilde{u}_{12}^{2} \left( x \right)\). The total CPU time was 24.775 s for \(n = 8\) and 44.715 s for \(n = 12\). We evaluated the exact solution values \(u_{ex} \left( {x_{i} } \right)\) and \(\tilde{u}_{8}^{2} \left( {x_{i} } \right), \, \tilde{u}_{12}^{2} \left( {x_{i} } \right)\) at the same nodes and estimated the absolute errors \(R_{n}^{2} \left( {x_{i} } \right) = \left| { u_{ex} \left( {x_{i} } \right) - \tilde{u}_{n}^{2} \left( {x_{i} } \right)} \right|\) for \(n = 8,12\), as shown in Table 4; Fig. 4 plots these errors.

Table 3 The absolute errors \(R_{n}^{1} \left( {x_{i} } \right)\) for \(n = 2,3\)
Fig. 3 The First Technique

Table 4 The absolute errors \(R_{n}^{2} \left( {x_{i} } \right)\) for \(n = 8,12\)
Fig. 4 The Second Technique

Example 3

Consider the integral equation,

$$ u\left( x \right) = e^{x} - 4.0602 + \int\limits_{0}^{1} {\frac{u\left( t \right)}{{\sqrt {1 - t} }}} dt; \, 0 \le x \le 1, $$
(57)

whose exact solution is given by \(u_{ex} \left( x \right) = e^{x}\) [20]. Using the first technique for \(\delta_{1} = \delta_{2} = 0\) with \(x = 1\) in the kernel, we obtain \(\tilde{u}_{n}^{1} \left( x \right)\) for \(n = 3,5\). The total CPU time was 8.882 s for \(n = 3\) and 11.830 s for \(n = 5\). We evaluated the exact solution values \(u_{ex} \left( {x_{i} } \right)\) and the uniformly interpolated polynomials \(\tilde{u}_{3}^{1} \left( {x_{i} } \right), \, \tilde{u}_{5}^{1} \left( {x_{i} } \right)\) at the set of nodes \(x_{i} = 0:0.1:1.0\) and estimated the absolute errors \(R_{n}^{1} \left( {x_{i} } \right) = \left| { u_{ex} \left( {x_{i} } \right) - \tilde{u}_{n}^{1} \left( {x_{i} } \right)} \right|\) for \(n = 3,5\), as shown in Table 5; Fig. 5 plots these errors. Using the second technique for \(\delta_{1} = 0,\delta_{2} = 1/250\), we obtain the uniformly interpolated polynomials \(\tilde{u}_{n}^{2} \left( x \right)\) for \(n = 6,8\). The total CPU time was 15.317 s for \(n = 6\) and 22.591 s for \(n = 8\). We evaluated the exact solution values \(u_{ex} \left( {x_{i} } \right)\) and \(\tilde{u}_{6}^{2} \left( {x_{i} } \right), \, \tilde{u}_{8}^{2} \left( {x_{i} } \right)\) at the same nodes and estimated the absolute errors \(R_{n}^{2} \left( {x_{i} } \right) = \left| { u_{ex} \left( {x_{i} } \right) - \tilde{u}_{n}^{2} \left( {x_{i} } \right)} \right|\) for \(n = 6,8\), as shown in Table 6; Fig. 6 plots these errors.

Table 5 The absolute errors \(R_{n}^{1} \left( {x_{i} } \right)\) for \(n = 3,5\)
Fig. 5 The First Technique

Table 6 The absolute errors \(R_{n}^{2} \left( {x_{i} } \right)\) for \(n = 6,8\)
Fig. 6 The Second Technique

Example 4

Consider the integral equation,

$$ u\left( x \right) = f\left( x \right) + \frac{1}{10}\int\limits_{0}^{1} {\left| {x - t} \right|^{{ - \tfrac{1}{3}}} u\left( t \right)} \, dt; \, 0 \le x \le 1, $$
(58)

where

$$ f\left( x \right) = x^{2} \left( {1 - x^{2} } \right) - \frac{27}{{30800}}\left[ {x^{8/3} \left( {54x^{2} - 126x + 77} \right) + \left( {1 - x} \right)^{8/3} \left( {54x^{2} + 18x + 5} \right)} \right], $$
(59)

whose exact solution is given by \(u_{ex} \left( x \right) = x^{2} \left( {1 - x} \right)^{2}\) [21]. Using the first technique for \(n = 2,5\) and \(\delta_{1} = 0,\delta_{2} = 1/5\), we obtain \(\tilde{u}_{2}^{1} \left( x \right),\tilde{u}_{5}^{1} \left( x \right)\). The total CPU time was 9.778 s for \(n = 2\) and 15.501 s for \(n = 5\). We evaluated the exact solution values \(u_{ex} \left( {x_{i} } \right)\) and the interpolated solution values \(\tilde{u}_{2}^{1} \left( {x_{i} } \right), \, \tilde{u}_{5}^{1} \left( {x_{i} } \right)\) at the set of nodes \(x_{i} = 0:0.1:1.0\) and estimated the absolute errors \(R_{n}^{1} \left( {x_{i} } \right) = \left| { u_{ex} \left( {x_{i} } \right) - \tilde{u}_{n}^{1} \left( {x_{i} } \right)} \right|\) for \(n = 2,5\), as shown in Table 7; Fig. 7 plots these errors. Using the second technique for \(n = 2,3,4\) and \(\delta_{1} = 1/15,\delta_{2} = 0\), we obtained \(\tilde{u}_{2}^{2} \left( {x_{i} } \right),\tilde{u}_{3}^{2} \left( {x_{i} } \right),\tilde{u}_{4}^{2} \left( {x_{i} } \right)\) at the same nodes. The total CPU time was 10.331 s for \(n = 2\), 11.713 s for \(n = 3\), and 15.699 s for \(n = 4\). Table 8 contains the absolute errors \(R_{n}^{2} \left( {x_{i} } \right) = \left| { u_{ex} \left( {x_{i} } \right) - \tilde{u}_{n}^{2} \left( {x_{i} } \right)} \right|\) for \(n = 2,3,4\) at the nodes \(x_{i} = 0:0.1:1.0\); Fig. 8 plots \(R_{2}^{2} \left( {x_{i} } \right)\), \(R_{3}^{2} \left( {x_{i} } \right)\), and \(R_{4}^{2} \left( {x_{i} } \right)\).

Table 7 The absolute errors \(R_{2}^{1} \left( {x_{i} } \right)\) and \(R_{5}^{1} \left( {x_{i} } \right)\)
Fig. 7 The First Technique

Table 8 The absolute errors \(R_{n}^{2} \left( {x_{i} } \right)\) for \(n = 2,3,4\)
Fig. 8 The Second Technique

Example 5

Consider the nonsingular Fredholm integral equation of the second kind,

$$ u\left( x \right) = e^{ - x} + \int\limits_{0}^{1} {e^{x + t} } u\left( t \right)dt; \, 0 \le x \le 1 $$
(60)

whose exact solution is given by \(u_{ex} \left( x \right) = e^{ - x} + \frac{{2e^{x} }}{{3 - e^{2} }}\) [35]. Using the first technique for \(\delta_{1} = \delta_{2} = 0\), we obtained uniformly interpolated polynomials \(\tilde{u}_{n}^{1} \left( x \right)\) for \(n = 2,5,10\). The CPU time was 7.987 s for \(n = 2\), 12.859 s for \(n = 5\), and 36.077 s for \(n = 10\). We evaluated the exact solution values \(u_{ex} \left( {x_{i} } \right)\) and \(\tilde{u}_{n}^{1} \left( {x_{i} } \right)\) for \(n = 2,5,10\) at the set of nodes \(x_{i} = 0:0.1:1.0\) and then estimated the absolute errors \(R_{n}^{1} \left( {x_{i} } \right) = \left| { u_{ex} \left( {x_{i} } \right) - \tilde{u}_{n}^{1} \left( {x_{i} } \right)} \right|\) for \(n = 2,5,10\), as shown in Table 9; Fig. 9 plots these errors. Using the second technique for \(n = 2,5,10\) and \(\delta_{1} = \delta_{2} = 0\), we obtained the uniformly interpolated polynomials \(\tilde{u}_{n}^{2} \left( x \right)\) for \(n = 2,5,10\). The total CPU time was 8.086 s for \(n = 2\), 13.579 s for \(n = 5\), and 31.443 s for \(n = 10\). We evaluated \(u_{ex} \left( {x_{i} } \right)\) and \(\tilde{u}_{n}^{2} \left( {x_{i} } \right)\) at the same nodes and estimated the absolute errors \(R_{n}^{2} \left( {x_{i} } \right) = \left| { u_{ex} \left( {x_{i} } \right) - \tilde{u}_{n}^{2} \left( {x_{i} } \right)} \right|\), as shown in Table 10 and Fig. 10. The interpolated solutions generated by either technique strongly converge to the exact ones.

Table 9 The absolute errors \(R_{n}^{1} \left( {x_{i} } \right)\) for \(n = 2,5,10\)
Fig. 9 The First Technique

Fig. 10 The Second Technique

Table 10 The absolute errors \(R_{n}^{2} \left( {x_{i} } \right)\) for \(n = 4,6\)

Example 6

Consider the nonsingular Fredholm integral equation of the second kind,

$$ u\left( x \right) = x + \int\limits_{ - 1}^{1} {\left( {x^{4} - t^{4} } \right)} u\left( t \right)dt; \, - 1 \le x \le 1 $$
(61)

whose exact solution is given by \(u_{ex} \left( x \right) = x\) [36]. Using formula (10) of the first technique for \(\delta_{1} = \delta_{2} = 0\), we obtained the interpolated polynomials \(\tilde{u}_{n}^{1} \left( x \right)\) for \(n = 5,6\). The CPU time was 11.474 s for \(n = 5\) and 11.777 s for \(n = 6\). We evaluated the exact solution values \(u_{ex} \left( {x_{i} } \right)\) and \(\tilde{u}_{n}^{1} \left( {x_{i} } \right)\) for \(n = 5,6\) at the set of nodes \(x_{i} = - 1:0.2:1.0\) and then estimated the absolute errors; it turns out that \(R_{n}^{1} \left( {x_{i} } \right) = \left| { u_{ex} \left( {x_{i} } \right) - \tilde{u}_{n}^{1} \left( {x_{i} } \right)} \right| = 0\) for \(n = 5,6\). Using formulas (26) and (27) of the second technique for \(\delta_{1} = \delta_{2} = 0\), we obtained interpolated polynomials \(\tilde{u}_{n}^{2} \left( x \right)\) for \(n \ge 2\) exactly equal to the exact solution. The total CPU time for \(n = 2\) was 9.659 s.

Conclusion

We modified the traditional barycentric Lagrange interpolation formula and expressed it as a product of four matrices, one of which is the monomial basis function matrix. Based on this advanced formula, we presented two techniques for finding interpolant solutions of weakly singular Fredholm integral equations of the second kind. The kernel is interpolated twice, with respect to both variables, and is thereby expressed via five matrices. The advantage of the presented techniques is that we can isolate the singularity of the kernel and easily find the interpolant solution in matrix form without applying the collocation method. The most important advantage lies in the given rules for choosing two different sets of interpolation nodes associated with the kernel's two variables, so that the argument of the kernel's root remains strictly positive and the kernel values remain real. Thus, the singularity is completely removed. The convergence in the mean and the error norm estimation are studied. The interpolant solutions of the first four illustrated examples, involving weakly singular equations, are found to converge strongly and uniformly to the exact ones, faster than those obtained by the other cited methods. The interpolant solutions of the fifth and sixth examples, involving nonsingular equations, are found to be equal to the exact ones. Thus, the efficiency and genuineness of the given method are confirmed.

Availability of data and materials

Not applicable.

References

  1. Kendall EA. The Numerical Solution of Integral Equations of the Second Kind. Cambridge: Cambridge University Press; 2010.

    MATH  Google Scholar 

  2. Farhad SHL, Reza G, Vladimir IO. Novel single-source surface integral equation for scattering problems by 3-D dielectric objects. IEEE Trans Antennas Propagation. 2018;66(2):797–807.

    Article  MathSciNet  Google Scholar 

  3. Adrian SB, Andriulli FP, Eibert TF. On a refinement-free Calderón multiplicative preconditioner for the electric field integral equation. Physics. 2019;376:1232–52.

    MathSciNet  MATH  Google Scholar 

  4. Shoukralla ES. Numerical Solution of Helmholtz Equation for an Open Boundary in Space. J Appl Math Modeling. 1997;21:231–2.

    Article  Google Scholar 

  5. Qin SL, Sheng S, Weng CC. A Potential-Based Integral Equation Method for Low-Frequency Electromagnetic Problems. IEEE Trans Antennas Propag. 2018;66(3):1413–26.

    Article  Google Scholar 

  6. Dmitriev VI, Dmitrieva IV, Osokin NA. Solution of an Integral Equation of the first Kind with Logarithmic Kernel. Comput Math Model. 2018;29(3):307–18.

    Article  MathSciNet  Google Scholar 

  7. Shoukralla ES. A Technique for the Solution of a Certain Singular Integral Equation of The First Kind. Int J Computer Math. 1998;69:165–73.

    Article  MathSciNet  Google Scholar 

  8. Shoukralla ES. Approximate solution to weakly singular integral equations. J Appl Math Modeling. 1996;20:800–3.

  9. Shoukralla ES. A Numerical Method for Solving Fredholm Integral Equations of the First Kind with Logarithmic Kernels and Singular Unknown Functions. J Appl Comput Math. 2020;6:172.

  10. Shoukralla ES. Application of Chebyshev Polynomials of the Second Kind to the Numerical Solution of Weakly Singular Fredholm Integral Equations of the First Kind. IAENG Int J Appl Math. 2021;51:8.

  11. Shoukralla ES, Markos MA. The economized monic Chebyshev polynomials for solving weakly singular Fredholm integral equations of the first kind. Asian-Eur J Math. 2020;12:1.

  12. Shoukralla ES, Markos MA. Numerical Solution of a Certain Class of Singular Fredholm Integral Equations of the First Kind via the Vandermonde Matrix. Int J Math Models Methods Appl Sci. 2020;14:48–53.

  13. Shoukralla ES, Kamel M, Markos MA. A new computational method for solving weakly singular Fredholm integral equations of the first kind. In: 13th IEEE International Conference on Computer Engineering and Systems (ICCES), Cairo, Egypt, IEEE Xplore; 2018.

  14. Yang Y, Tang Z, Huang Y. Numerical Solutions for Fredholm Integral Equations of the Second Kind with Weakly Singular Kernel Using Spectral Collocation Method. Appl Math Comput. 2019;349:314–24.

  15. Behzadi R, Tohidi E, Toutounian F. Numerical solution of weakly singular Fredholm integral equations via generalization of the Euler-Maclaurin summation formula. J Taibah Univ Sci. 2014;8:199–205.

  16. Alvandi A, Paripour M. Reproducing kernel method for a class of weakly singular Fredholm integral equations. J Taibah Univ Sci. 2018;12(4):409–14.

  17. Panigrahi BL, Mandal M, Nelakanti G. Legendre Multi-Galerkin Methods for Fredholm Integral Equations with Weakly Singular Kernel and the Corresponding Eigenvalue Problem. J Comput Appl Math. 2019;346:224–36.

  18. Guebbai H. Regularization and Fourier Series for Fredholm Integral Equations of the Second Kind with a Weakly Singular Kernel. Numer Funct Anal Optim. 2018;39(1):1–10.

  19. Filomena D, Rosario F. Projection Methods Based on Grids for Weakly Singular Integral Equations. Appl Numer Math. 2017;114:47–54.

  20. Boichuk OA, Feruk VA. Linear Boundary-Value Problems for Weakly Singular Integral Equations. J Math Sci. 2020;247:2.

  21. Behera S, Saha Ray S. Euler wavelets method for solving fractional-order linear Volterra-Fredholm integro-differential equations with weakly singular kernels. Comput Appl Math. 2021;40:192.

  22. Seifi A, Lotfi T, Allahviranloo T, Paripour M. An effective collocation technique to solve the singular Fredholm integral equations with Cauchy kernel. Adv Differ Equ. 2017;2017:280.

  23. Rehman S, Pedas A, Vainikko G. Fast solvers of weakly singular integral equations of the second kind. Math Model Anal. 2018;23(4):639–64.

  24. Shoukralla ES. Interpolation method for solving weakly singular integral equations of the second kind. Appl Comput Math. 2021;10(3):76–85.

  25. Shoukralla ES. Interpolation Method for Evaluating Weakly Singular Kernels. J Math Comput Sci. 2021;11(6):7487–510.

  26. Shoukralla ES, Elgohary H, Ahmed BM. Barycentric Lagrange interpolation for solving Volterra integral equations of the second kind. J Phys Conf Ser. 2020;1447:012002.

  27. Shoukralla ES, Ahmed BM. Numerical Solutions of Volterra Integral Equations of the Second Kind using Lagrange interpolation via the Vandermonde matrix. J Phys Conf Ser. 2020;1447:012003.

  28. Shoukralla ES, Ahmed BM. Multi-techniques method for Solving Volterra Integral Equations of the Second Kind. In: 14th International Conference on Computer Engineering and Systems (ICCES). IEEE. 2019.

  29. The Barycentric Lagrange Interpolation via Maclaurin Polynomials for Solving the Second Kind Volterra Integral Equations. In: IEEE 15th International Conference on Computer Engineering and Systems (ICCES), Cairo, Egypt; 2020.

  30. Berrut J-P, Trefethen LN. Barycentric Lagrange interpolation. SIAM Rev. 2004;46(3):501–17.

  31. Higham NJ. The numerical stability of barycentric Lagrange interpolation. IMA J Numer Anal. 2004;24:547–56.

  32. Gander W. Change of basis in polynomial interpolation. Numer Linear Algebra Appl. 2005;12:769–78.

  33. Cvetkovski Z. Inequalities, theorems, techniques, and selected problems. Berlin: Springer; 2012.

  34. Botelho FS. Real Analysis and Applications. Berlin: Springer; 2018.

  35. Shoukralla ES, Elgohary H, Morgan M. Shifted Legendre Polynomials for Solving Second Kind Fredholm Integral Equations. Menoufia J Elect Eng Res (MJEER). 2021;30(1):76–83.

  36. Shoukralla ES, El-Serafi SA, Elgohary H, Morgan M. A Computational Method for Solving Fredholm Integral Equations of the Second Kind. Comput Methods. 2019;28:280–5.

Acknowledgements

We express our gratitude to the anonymous referees for their constructive reviews of the manuscript and for helpful comments.

Funding

Not applicable.

Author information

Contributions

SES developed the theoretical formalism. NS and AYS performed the analytic calculations, carried out the numerical simulations, and contributed to the final version of the manuscript. All authors read and approved the final manuscript.

Corresponding author

Correspondence to Ahmed Y. Sayed.

Ethics declarations

Competing interests

The authors declare that they have no competing interests.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

Cite this article

Shoukralla, E.S., Saber, N. & Sayed, A.Y. Computational method for solving weakly singular Fredholm integral equations of the second kind using an advanced barycentric Lagrange interpolation formula. Adv. Model. and Simul. in Eng. Sci. 8, 27 (2021). https://doi.org/10.1186/s40323-021-00212-6
