On convergence and error analysis of the parametric iteration method

The parametric iteration method (PIM) falls under the category of analytic approximate methods for solving various kinds of nonlinear differential equations. Its convergence has been proved only for some special problems. In this paper, an error analysis is presented and, based on it, the convergence of the method for general problems is proved. To assess the performance of the claimed error bound and the convergence of the method, numerical experiments performed in MATLAB 2012b are presented.


Introduction
The parametric iteration method (PIM) is an analytic approximate method for solving linear and nonlinear problems, proposed in [1]. Initially it was proposed for solving nonlinear fractional differential equations by modifying He's variational iteration method (VIM) [2]. The PIM enjoys some augmented factors which make it more complete than the VIM: by adjusting these factors, one can establish more accurate approximations than with the VIM. During the recent decade, many researchers have worked on the VIM for solving various kinds of problems; surveying them is beyond the scope of this paper. Besides, some authors have focused on the convergence of the VIM for specific problems, such as multi-order fractional DEs [3], multi-delay DEs [4], ODEs [5], and systems of ODEs [6]. Among these, the work of Odibat [7] is the most interesting and distinctive because of its generality: he concluded the convergence of the VIM by introducing a semi-contraction operator and completed the proof along the lines of the Banach fixed point theorem. On the other hand, the PIM has been utilized for solving various kinds of differential equations, such as the Abel equation [8], the nonlinear chaotic Genesio system [9], boundary value problems [10], and linear optimal control problems [11]. Convergence theorems for some particular cases were discussed in parts of this literature (e.g., [8,9]), but what is still missing is a proof of the convergence of the PIM for a general differential equation. Also, from both the theoretical and practical viewpoints, another necessary discussion concerns the error bound of the approximations. Therefore, the goal of this article is to establish an error term and then present a general proof of the convergence of the PIM.

Parametric iteration method (PIM)
To explain the basic idea of the PIM, consider the following differential equation:

(1) $A[u(t)] = 0$,

where $A$ is a nonlinear operator, $t$ denotes the time, and $u(t)$ is an unknown variable. First rewrite (1) as

(2) $L[u(t)] + N[u(t)] = g(t)$,

where $L$ and $N$ denote the linear and nonlinear differential operators of the unknown $u(t)$, respectively, and $g(t)$ is the source term. We then construct a family of iterative formulas as

(3) $u_{k+1}(t) = u_k(t) + h \int_{t_0}^{t} H(s)\,\big\{ L[u_k(s)] + N[u_k(s)] - g(s) \big\}\,ds, \qquad k = 0, 1, 2, \dots,$

where $h \neq 0$ and $H(s) \neq 0$ denote the so-called auxiliary parameter and auxiliary function, respectively. In this work we take $H(s) = 1$. Accordingly, the successive approximations $u_k(t)$, $k \ge 1$, will be readily obtained by choosing the zeroth component $u_0(t)$. (For more details about the PIM see [1].)
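As a concrete illustration, the iteration (3) with $H(s) = 1$ can be sketched in a few lines. The code below is a minimal Python stand-in (the paper's experiments were run in MATLAB), assuming the first-order form $u'(t) = f(t, u)$ treated in the next section, with the integral evaluated by the cumulative trapezoidal rule on a uniform grid.

```python
import numpy as np

def pim(f, u0, t0, T, h=-1.0, n_iter=10, n_grid=1001):
    """PIM for u'(t) = f(t, u), u(t0) = u0, with H(s) = 1.

    Uses the integrated form of the iteration,
        u_{k+1}(t) = (1 + h) u_k(t) - h u0 - h * int_{t0}^{t} f(s, u_k(s)) ds,
    which follows from (3) with L[u] = u' and N[u] = -f(t, u).
    """
    t = np.linspace(t0, T, n_grid)
    dt = t[1] - t[0]
    u = np.full_like(t, u0)                      # zeroth component u_0(t) = u0
    for _ in range(n_iter):
        g = f(t, u)                              # f(s, u_k(s)) on the grid
        integral = np.concatenate(([0.0], np.cumsum((g[1:] + g[:-1]) * dt / 2)))
        u = (1 + h) * u - h * u0 - h * integral
    return t, u

# usage: u' = -u, u(0) = 1 on [0, 1]; exact solution exp(-t)
t, u = pim(lambda t, u: -u, u0=1.0, t0=0.0, T=1.0)
err = np.max(np.abs(u - np.exp(-t)))
```

With $h = -1$ the iteration reduces to classical Picard iteration, which is one reason this value recurs in the numerical section below.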

Error analysis and convergence
Consider the following nonlinear problem

(4) $u'(t) = f(t, u(t)), \qquad u(t_0) = u^0, \qquad t \in [t_0, T],$

where $u$ and $f$ are defined as

(5) $u(t) = (u_1(t), \dots, u_m(t))^T, \qquad f(t, u) = (f_1(t, u), \dots, f_m(t, u))^T,$

where $f_1, \dots, f_m$ are continuous real functions on $D = [t_0, T] \times \mathbb{R}^m$. In $\mathbb{R}^m$ we use the infinity norm, i.e., for a vector $x = (x_1, \dots, x_m)^T$ we have $\|x\|_\infty = \max_{1 \le i \le m} |x_i|$, and for every $u_i \in C[t_0, T]$ we use the maximum norm $\|u_i\| = \max_{t_0 \le t \le T} |u_i(t)|$. Also the norm of vector functions like $u(t)$ is

(6) $\|u\| = \max_{1 \le i \le m} \|u_i\|.$

In order to use the PIM, we rewrite (4) as

(7) $L[u(t)] + N[u(t)] = 0,$

where $L[u] = u'(t)$ is an auxiliary linear operator and $N[u] = -f(t, u(t))$ is the nonlinear operator. Then the iteration formula constructed by the PIM will be defined by

(8) $u_{k+1}(t) = u_k(t) + h \int_{t_0}^{t} \big( u_k'(s) - f(s, u_k(s)) \big)\,ds, \qquad k = 0, 1, 2, \dots.$

Choosing the initial approximation $u_0(t) = u^0$ in the above sequence, clearly we can say that $u_k(t_0) = u^0$ for every $k$. So the following lemma is obtained.

Lemma 3.1: For every $k \ge 0$ and for every $h$,

(9) $u_k(t_0) = u^0.$

Now the iteration formula (8) can be written as

(10) $u_{k+1}(t) = (1 + h)\,u_k(t) - h\,u^0 - h \int_{t_0}^{t} f(s, u_k(s))\,ds.$

Let us denote the $n$-th approximation by $u_n(t)$; then the convergence of the sequence $\{u_n(t)\}$ is studied through the norm

(11) $\|u_{n+1} - u_n\|.$

Before presenting the main theorem, we restate the Lipschitz condition for the vector function $f$. Suppose that for every component $f_i$ of the function $f$, there exists a positive real constant $L_i$ such that for every $t \in [t_0, T]$ and for every $u, v \in \mathbb{R}^m$ the following condition is satisfied:

(12) $|f_i(t, u) - f_i(t, v)| \le L_i \|u - v\|_\infty.$

In this situation, letting $L = \max_{1 \le i \le m} L_i$, we can say that $f$ satisfies a Lipschitz condition with respect to $u$ with the Lipschitz constant $L$, i.e.
(13) $\|f(t, u) - f(t, v)\|_\infty \le L \|u - v\|_\infty.$

Theorem 3.2: Assume that $f$ is continuous on $D$, where $D = [t_0, T] \times \mathbb{R}^m$, and satisfies a Lipschitz condition on $D$ with respect to $u$ with the Lipschitz constant $L$. Also suppose that $\|f\|$ is bounded on $D$ by a positive real number $M$. Then for two arbitrary successive approximations we have

(14) $\|u_{n+1} - u_n\| \le |h|\,M\,(T - t_0)\,\big(|1 + h| + |h|\,L\,(T - t_0)\big)^{n}.$

Proof: If we denote the approximate solution obtained by the first iteration by $u_1(t)$, then according to (10), and noticing that $u_0(t) = u^0$ is constant, we can write, for every $t \in [t_0, T]$,

(15) $\|u_1(t) - u_0(t)\|_\infty = \Big\| h \int_{t_0}^{t} f(s, u^0)\,ds \Big\|_\infty \le |h|\,M\,(T - t_0).$

Now, let $k = 1$ in (10); using the notation $\gamma = |1 + h| + |h|\,L\,(T - t_0)$, we have

$u_2(t) - u_1(t) = (1 + h)\big(u_1(t) - u_0(t)\big) - h \int_{t_0}^{t} \big( f(s, u_1(s)) - f(s, u_0(s)) \big)\,ds.$

We rearrange the final statement as below:

(16) $\|u_2(t) - u_1(t)\|_\infty \le \big(|1 + h| + |h|\,L\,(T - t_0)\big)\,\|u_1 - u_0\| \le |h|\,M\,(T - t_0)\,\gamma.$

And similarly

(17) $\|u_3(t) - u_2(t)\|_\infty \le |h|\,M\,(T - t_0)\,\gamma^{2}.$

In summary, this argument leads to the following general form, valid for every component index $i$ and every $t \in [t_0, T]$:

(18) $|u_{n+1,i}(t) - u_{n,i}(t)| \le |h|\,M\,(T - t_0)\,\gamma^{n}.$

The maximum of the left-hand side over the index $i$ satisfies (18) too, due to the fact that the right-hand side of (18) is independent of the index $i$. So, by the defined norm (6), taking the maximum of both sides of (18) over all $t \in [t_0, T]$, we obtain (14). This completes the proof. ■

Now we want to prove that if we choose $h$ such that $\gamma = |1 + h| + |h|\,L\,(T - t_0) < 1$, then the right-hand side of (14) vanishes when $n$ tends to infinity. First we prove an auxiliary lemma.
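The quantities entering the bound can be computed mechanically. The Python sketch below evaluates the Lipschitz constant of a hypothetical linear right-hand side $f(t, u) = Au$, for which condition (12) holds with $L_i = \sum_j |a_{ij}|$, so the $L$ of (13) is the matrix infinity norm, and then evaluates the right-hand side of (14) as we read it; the matrix and the values of $M$, $L$, $h$ are made up for illustration.

```python
import numpy as np

# Hypothetical linear right-hand side f(t, u) = A u: condition (12) holds
# with L_i = sum_j |a_ij|, so L in (13) is the matrix infinity norm of A.
A = np.array([[ 0.0,  1.0],
              [-2.0, -0.5]])
L = np.linalg.norm(A, ord=np.inf)        # max absolute row sum = 2.5 here

def pim_error_bound(h, M, L, t0, T, n):
    """Right-hand side of (14): |h| M (T - t0) (|1+h| + |h| L (T - t0))^n."""
    gamma = abs(1 + h) + abs(h) * L * (T - t0)
    return abs(h) * M * (T - t0) * gamma ** n

# For h = -1 the factor |1+h| drops out and gamma = L (T - t0); with
# made-up M = 1, L = 0.5 on [0, 1] the bound decays geometrically.
bounds = [pim_error_bound(-1.0, 1.0, 0.5, 0.0, 1.0, n) for n in range(6)]
decreasing = all(b1 > b2 for b1, b2 in zip(bounds, bounds[1:]))
```

When $\gamma < 1$ the computed sequence of bounds is strictly decreasing, which is exactly the mechanism behind the convergence argument that follows.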

Numerical experiments
To demonstrate the efficiency of the error bound defined by (14), we consider the following two-dimensional test problem.

(28)
where the time domain is $[t_0, T]$ and the exact solutions are taken from [12]. In view of (4) and (5), $f$ is linear and obviously it satisfies a Lipschitz condition with a Lipschitz constant $L$. Choosing the initial approximation $u_0(t) = u^0$, $\|f\|$ is bounded by a constant $M$ and, using the notation $E_n$ for the error bound described in (14), we have

(32) $E_n = |h|\,M\,(T - t_0)\,\big(|1 + h| + |h|\,L\,(T - t_0)\big)^{n}.$

Furthermore, the norm of the direct difference of two successive approximations $u_n$ and $u_{n+1}$, appearing in the left-hand side of (14), is denoted by

(33) $D_n = \|u_{n+1} - u_n\|.$

Then, for every $n$ and for all the considered values of $h$, the numerical results must confirm the relation $D_n \le E_n$ to ensure that the theoretical result of Theorem 3.2 is reliable. Also, in order to discuss the convergence of the PIM claimed in Corollary 3.5, we denote the absolute error by $e_n$ and define it by

(34) $e_n = \|u_n - u\|,$

where $u$ is the exact solution. In Fig. 1, we plot $E_n$, $D_n$ and $e_n$ obtained by the PIM for a fixed choice of $h$ and increasing $n$.
Other values of $h$ and $n$ are discussed in Fig. 2 and Table 1.

Fig. 1: Error bound: As can be seen from the plot, the estimated error bound is indeed an upper bound for the norm of the difference of successive approximations, which confirms inequality (14). This is true for every iteration, as the zoomed part shows. The plot also shows that the error bound vanishes as $n$ increases, which is a confirmation of Theorem 3.4. Cauchy sequence: The plot of the successive differences shows that $\|u_{n+1} - u_n\| \to 0$ as $n$ grows. This means that for every $\varepsilon > 0$ and for sufficiently large $n$ we have $\|u_{n+1} - u_n\| < \varepsilon$, which confirms (27). The latter can be used to conclude that the sequence $\{u_n\}$ is a Cauchy sequence.
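The kind of check behind Fig. 1 can be reproduced on a stand-in problem (the paper's two-dimensional problem (28) is not reproduced here, and the experiments were originally run in MATLAB). The Python sketch below iterates the integrated form of the PIM on the scalar equation $u' = -u/2$, $u(0) = 1$, and verifies that the successive-difference norm of (33) stays below the bound of (14) at every iteration.

```python
import numpy as np

# Stand-in check of inequality (14): u' = -u/2 on [0, 1], u(0) = 1,
# so L = 0.5 and |f| <= M = 0.5 on the region swept by the iterates.
t0, T, h, u0 = 0.0, 1.0, -1.0, 1.0
L_const, M = 0.5, 0.5
t = np.linspace(t0, T, 2001)
dt = t[1] - t[0]

def cumtrapz(g):
    # cumulative trapezoidal approximation of int_{t0}^{t} g(s) ds
    return np.concatenate(([0.0], np.cumsum((g[1:] + g[:-1]) * dt / 2)))

gamma = abs(1 + h) + abs(h) * L_const * (T - t0)
u = np.full_like(t, u0)
diffs, bounds = [], []
for n in range(8):
    u_next = (1 + h) * u - h * u0 - h * cumtrapz(-0.5 * u)
    diffs.append(np.max(np.abs(u_next - u)))          # quantity in (33)
    bounds.append(abs(h) * M * (T - t0) * gamma**n)   # bound in (14)
    u = u_next

within_bound = all(d <= b + 1e-9 for d, b in zip(diffs, bounds))
```

For this problem `within_bound` holds and the differences shrink much faster than the geometric bound, mirroring the behavior reported for Fig. 1.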

Analysis of the convergence: The plot of the absolute error shows that it tends to zero for sufficiently large $n$, and this is a confirmation of the convergence of the sequence constructed by the PIM. In Fig. 2, we plot the error bound and the successive differences for the PIM solutions with various values of $h$ and $n$. As can be seen, the error bound confirms what is claimed in Theorems 3.2 and 3.4. Although Fig. 1 and Fig. 2 provide good information about the error bound, what they do not show is specific data on the convergence rate. So, to study the convergence rate, we report the final values of the bound and the differences for several choices of $h$ and $n$ in Table 1. Table 1 shows that for some values of $h$ the convergence is excellent, but for others many more iterations are needed to conclude convergence. For one choice of $h$, although the geometric factor in (32) seems small enough for the bound to vanish, the large leading coefficient in the same formula makes the bound very big; for another choice the geometric factor is similar but the leading coefficient is not very big, and consequently the convergence is faster. From this viewpoint, $h = -1$ seems to be the best choice; however, there exist many counterexamples in nonlinear problems showing that values other than $h = -1$ can give better approximations. Such an argument leads to a known problem, namely finding an optimal value of the accelerating parameter $h$, which in general is an open problem in this field. In light of the results of this paper, a proposal is to minimize the error term as a function of the parameter $h$, which is left to further works.
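The closing proposal can be illustrated under our reading of the bound: treat the geometric factor $|1 + h| + |h|\,L\,(T - t_0)$ as a function of $h$ and minimize it numerically. The Python sketch below uses a made-up value $L\,(T - t_0) = 0.5$ and a simple grid search.

```python
import numpy as np

# Geometric factor of the error bound as a function of h (our reading of
# (14)), for a made-up value L * (T - t0) = 0.5.
LT = 0.5
hs = np.linspace(-2.0, -0.01, 200)               # candidate values of h
gammas = np.abs(1 + hs) + np.abs(hs) * LT        # factor to be minimized
h_best = hs[np.argmin(gammas)]                   # smallest factor on the grid
```

On this grid the minimum sits at $h = -1$, where the $|1+h|$ term vanishes; as noted above, for nonlinear problems other values may still perform better in practice, which is why the full minimization of the error term is left as an open direction.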

Conclusion
In this paper, a convergence analysis of the PIM is presented. This is performed by establishing a novel error bound and showing that this error bound tends to zero. Also, an interesting result has been concluded for the auxiliary parameter h. Although finding an optimal h is in general an open problem, we hope that the results of this paper are a promising tool for researchers. Our proposal is to find the optimal h by minimizing the presented error term as a function of h, which is left to further works.