Some Novel Newton-Type Methods for Solving Nonlinear Equations

abstract: The aim of this paper is to present a new nonstandard Newton iterative method for solving nonlinear equations. The convergence of the proposed method is proved, and it is shown that the new method has cubic convergence. Furthermore, two new multi-point methods with sixth-order convergence, based on the introduced method, are presented. We also describe the basins of attraction of these methods. Finally, some numerical examples are given to show the performance of our methods in comparison with other methods available in the literature.


Introduction
Finding a root of a nonlinear equation f(x) = 0 is one of the most important problems in applied sciences and engineering. In the current paper, we consider the problem of finding a real simple zero α of a function f : I ⊂ R → R, i.e., f(α) = 0 and f′(α) ≠ 0. Here, f is a sufficiently differentiable function on the open interval I and α ∈ I. Among the available techniques, iterative methods are a powerful tool for solving the nonlinear equation f(x) = 0 (see, for example, [1,2,3,4,5,6,7,8,9] and the references therein). For a relatively comprehensive survey of multi-point iterative methods, the reader is referred to [10]. We recall here some classical definitions which will be useful in the sequel.

M. Bisheh-Niasar and A. Saadatmandi

Definition 1.1 ([11], order of convergence). Let {x_n}_{n=0}^∞ converge to α and assume that x_n ≠ α for each n. The rate of convergence of {x_n} to α is of order p with asymptotic error constant C if

lim_{n→∞} |x_{n+1} − α| / |x_n − α|^p = C,

where p ≥ 1 and C > 0.

Definition 1.2 ([12], efficiency index). The efficiency index is defined as p^{1/d}, where p is the order of convergence and d is the total number of functional evaluations per iteration.
In fact, the efficiency index gives a measure of the balance between the order of convergence and the number of functional evaluations per step [12]. It is worth mentioning here that, according to the Kung-Traub conjecture [12], an optimal iterative method without memory based on d evaluations can achieve a convergence order of at most 2^{d−1}. Here, an iterative method without memory is a scheme whose (n + 1)th iterate is obtained using only the previous nth iterate.
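As a small illustration of Definition 1.2, the efficiency indices quoted throughout this paper follow directly from p^{1/d}; the sketch below (illustrative only, with function and variable names of our own choosing) evaluates them:

```python
# Efficiency index p**(1/d): order of convergence p, d functional
# evaluations per iteration (Definition 1.2).
def efficiency_index(p, d):
    return p ** (1.0 / d)

newton_ei = efficiency_index(2, 2)  # Newton: order 2, evaluates f and f'
cubic_ei = efficiency_index(3, 3)   # order 3 using f, f' and f''
sixth_ei = efficiency_index(6, 5)   # order 6 using five evaluations
```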
Perhaps the most widely used of all one-dimensional root-finding algorithms is the classical Newton's method (also known as the Newton-Raphson method)

x_{n+1} = x_n − f(x_n)/f′(x_n),  n = 0, 1, 2, . . . . (1.1)

It is known that this method has second-order convergence to simple roots. Also, the efficiency index of Newton's method is √2 ≈ 1.414, because it uses one evaluation each of f(x_n) and f′(x_n) per iteration.
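A minimal sketch of the classical Newton iteration x_{n+1} = x_n − f(x_n)/f′(x_n) (the test function and tolerance below are illustrative choices, not taken from the paper):

```python
def newton(f, fprime, x0, tol=1e-12, max_iter=50):
    """Classical Newton-Raphson iteration for a simple root of f."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            break
        x = x - fx / fprime(x)   # the update (1.1)
    return x

# Example: the root sqrt(2) of f(x) = x**2 - 2, starting from x0 = 1.5.
root = newton(lambda x: x * x - 2.0, lambda x: 2.0 * x, 1.5)
```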
In this work, the idea behind the nonstandard finite difference method [14,15], elegantly combined with Newton's method, is used to develop a new third-order iterative method. Moreover, based on the introduced method, we develop two new multi-point methods with sixth-order convergence.
The rest of this paper is organized as follows. In Section 2, we construct a nonstandard Newton iterative method and present its convergence analysis. In Section 3, from this new method we obtain two new composite sixth-order methods. In Section 4, the basins of attraction of the new methods are presented. In Section 5, several numerical results are given to show the efficiency of our methods, and a comparison is made with existing results. Section 6 ends this paper with a brief conclusion.

The nonstandard Newton iterative method and convergence analysis
We assume that α is a real simple root of the nonlinear equation f(x) = 0 and that x_0 is an initial guess sufficiently close to α. Let f(x) be sufficiently smooth in a neighborhood of the root α. Then, using Taylor's expansion about the point x_0, we obtain

f(x) = f(x_0) + f′(x_0)(x − x_0) + (f″(x_0)/2!)(x − x_0)^2 + · · · . (2.1)

Substituting x = α into Eq. (2.1) gives

0 = f(x_0) + f′(x_0)(α − x_0) + (f″(x_0)/2!)(α − x_0)^2 + · · · . (2.2)

Truncating after the linear term yields

f′(x_0)(α − x_0) ≈ −f(x_0). (2.3)

Now, following the ideas of the nonstandard finite difference method developed by Mickens [14,15], the (α − x_0) term on the left-hand side of Eq. (2.3) is replaced by

(e^{b(α−x_0)} − 1)/b, (2.4)

where b is a parameter. Substituting Eq. (2.4) into the (α − x_0) term on the left-hand side of Eq. (2.3), we have

f′(x_0) (e^{b(α−x_0)} − 1)/b ≈ −f(x_0). (2.5)

By the Taylor's expansion of the term (e^{b(α−x_0)} − 1)/b in Eq. (2.5), we obtain

(e^{b(α−x_0)} − 1)/b = (α − x_0) + (b/2!)(α − x_0)^2 + · · · . (2.6)

Comparing Eqs. (2.2) and (2.6) suggests taking b = f″(x_0)/f′(x_0), so that the quadratic terms match. Solving Eq. (2.5) for α, from Eqs. (2.3) and (2.4) we obtain

α ≈ x_0 + (1/b) ln(1 − b f(x_0)/f′(x_0)). (2.7)

Based on Eq. (2.7), the following nonstandard Newton iterative method is suggested:

x_{n+1} = x_n + (1/b_n) ln(1 − b_n f(x_n)/f′(x_n)),  b_n = f″(x_n)/f′(x_n). (2.8)

Now, we discuss the convergence analysis of scheme (2.8).
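Assuming scheme (2.8) takes the logarithmic form x_{n+1} = x_n + (1/b_n) ln(1 − b_n f(x_n)/f′(x_n)) with b_n = f″(x_n)/f′(x_n) (an assumption consistent with the ln(1 − w) expansion used in the convergence proof), a minimal sketch of one iteration is as follows; the test function, starting point, and helper names are illustrative choices:

```python
import math

def nonstandard_newton_step(f, df, d2f, x):
    """One step of the (assumed) scheme (2.8): the Newton correction
    -f/f' is replaced by (1/b) ln(1 - b f/f') with b = f''(x)/f'(x).
    As b -> 0 this recovers the plain Newton step."""
    u = f(x) / df(x)
    b = d2f(x) / df(x)
    if abs(b) < 1e-14:
        return x - u
    return x + math.log(1.0 - b * u) / b   # requires 1 - b*u > 0

# Cube root of 2 as the root of f(x) = x**3 - 2, starting from x0 = 1.5.
x = 1.5
for _ in range(6):
    x = nonstandard_newton_step(lambda t: t**3 - 2.0,
                                lambda t: 3.0 * t**2,
                                lambda t: 6.0 * t, x)
```

Note that the step is well defined only while 1 − b_n f(x_n)/f′(x_n) > 0, which holds for initial guesses sufficiently close to a simple root.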

Convergence of the method
Theorem 2.1. Let α ∈ R be a simple zero of a sufficiently differentiable function f : I ⊂ R → R, and let f be monotone increasing on I. Then the method defined by (2.8) converges to α with cubic order of convergence, and the error satisfies

e_{n+1} = ((2/3)c_2^2 − c_3) e_n^3 + O(e_n^4),

where e_n = x_n − α and c_k = f^{(k)}(α)/(k! f′(α)).

Proof: By using Taylor's expansion about α, we have

f(x_n) = f′(α)(e_n + c_2 e_n^2 + c_3 e_n^3 + c_4 e_n^4 + · · · ), (2.10)
f′(x_n) = f′(α)(1 + 2c_2 e_n + 3c_3 e_n^2 + 4c_4 e_n^3 + · · · ), (2.11)
f″(x_n) = f′(α)(2c_2 + 6c_3 e_n + 12c_4 e_n^2 + · · · ). (2.12)

By using these expansions, we obtain expansions of b_n = f″(x_n)/f′(x_n) and w_n = b_n f(x_n)/f′(x_n) in powers of e_n. By the Taylor series expansion of ln(1 − w) and using a computer algebra system such as Maple, we have

e_{n+1} = ((2/3)c_2^2 − c_3) e_n^3 + O(e_n^4).

This completes the proof. ✷

For the computational cost, scheme (2.8) requires the evaluations of f(x_n), f′(x_n) and f″(x_n) per iteration. This gives 3^{1/3} ≈ 1.442 as the efficiency index of this method. Now let x_{n+1} = φ(x_n) define a one-point iterative method. As pointed out by Traub [12, Th. 5.3], to obtain a method of order p, one must use all derivatives up to order p − 1. Therefore, the main practical difficulty associated with one-point iterative methods is the evaluation of higher-order derivatives. Fortunately, multi-point iterative methods overcome this limitation. In the next section, based on scheme (2.8), we construct two new multi-point iterative methods with sixth-order convergence.

Construction of two multi-point iterative methods
In recent years there has been growing interest in multi-point methods for solving nonlinear equations (see, e.g., the survey paper [10] and the references therein). Here, we improve the convergence rate of (2.8). The main advantage of the proposed methods is that they have order six and do not require the evaluation of any third- or higher-order derivatives. Specifically, we propose the following two-step iterative methods:

Method I: (3.1)

and

Method II: (3.2)
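The general principle behind such two-step constructions can be sketched as follows. This is NOT the paper's Method I or Method II (whose exact correctors are given in (3.1) and (3.2)); it is only an illustrative composition, under the assumption that the predictor is the logarithmic form of (2.8): a third-order predictor followed by one Newton correction has order 3 × 2 = 6, at the cost of the five evaluations counted below (f, f′, f″ at x_n plus f, f′ at y_n):

```python
import math

def ns_step(f, df, d2f, x):
    # Predictor: one step of the (assumed) nonstandard scheme (2.8).
    u, b = f(x) / df(x), d2f(x) / df(x)
    return x - u if abs(b) < 1e-14 else x + math.log(1.0 - b * u) / b

def predictor_corrector(f, df, d2f, x):
    """Illustrative two-step composition (not the paper's methods):
    if y_n - a = O(e_n**3), one Newton correction squares that error,
    giving x_{n+1} - a = O(e_n**6)."""
    y = ns_step(f, df, d2f, x)
    return y - f(y) / df(y)

# Cube root of 2 as the root of f(x) = x**3 - 2, starting from x0 = 1.5.
x = 1.5
for _ in range(4):
    x = predictor_corrector(lambda t: t**3 - 2.0,
                            lambda t: 3.0 * t**2,
                            lambda t: 6.0 * t, x)
```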

Convergence of the methods
Now, we shall prove that Method I and Method II have sixth-order convergence.
Theorem 3.1. Under the assumptions of Theorem 2.1, the method described by (3.1) has sixth-order convergence to α and satisfies an error equation of the form e_{n+1} = C e_n^6 + O(e_n^7).

Proof: Dividing Eqs. (2.10) and (2.11) gives the expansion (3.3) of f(x_n)/f′(x_n) in powers of e_n. From the first step of Method I and Eq. (3.3), we obtain the expansion (3.4) of y_n − α. Expanding f(y_n), f′(y_n) and f″(y_n) about α by Taylor's theorem and using Eq. (3.4) yields (3.5)-(3.7), which combine to give (3.8) and (3.9).

From the second step of Method I and Eqs. (3.8) and (3.9), we obtain the stated sixth-order error equation, which completes the proof. ✷

Proof of Theorem 3.2: From the first step of Method II and Eq. (2.14), we obtain the expansion of y_n − α, and proceeding as in the proof of Theorem 3.1 we arrive at

e_{n+1} = ( · · · ) e_n^6 + O(e_n^7). (3.17)

✷
Per iteration, Method I and Method II each require two evaluations of the function, two of its first derivative and one of its second derivative. Therefore, the efficiency index is 6^{1/5} ≈ 1.431.

Dynamical behavior
The set of initial conditions leading to long-time behavior that approaches the attractor(s) of a dynamical system is called its basin of attraction [16,17]. To study the dynamical behavior, we analyze the basins of attraction of our methods for the polynomial f(z) = z^3 − 1, which has the simple zeros {1, −0.5 ± 0.866025i}. Toward this end, we take the rectangle D = [−4, 4] × [−4, 4] ⊂ C with a 400 × 400 grid. In Figure 1, we present the basins for Newton's method (1.1) and the new nonstandard Newton's method (2.8). The basins for Method I and Method II are plotted in Figure 2. Finally, graphical presentations of the number of iterations needed by our methods to converge to one of the roots of f(z) are shown in Figures 3 and 4. For more technical details on obtaining these figures, the interested reader is referred to [17,18].
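The basin computation can be sketched in a few lines. The sketch below uses Newton's method (1.1) on f(z) = z^3 − 1 over the grid described in the text; the iteration budget and tolerance are illustrative choices, not taken from the paper:

```python
import numpy as np

# Basin-of-attraction grid for f(z) = z**3 - 1 under Newton's method,
# on D = [-4, 4] x [-4, 4] with a 400 x 400 grid (as in the text).
roots = np.array([1.0, -0.5 + 0.8660254j, -0.5 - 0.8660254j])
xs = np.linspace(-4.0, 4.0, 400)
Z = xs[None, :] + 1j * xs[:, None]        # Z[i, j] = xs[j] + 1j*xs[i]
count = np.zeros(Z.shape, dtype=int)      # iterations until convergence
for _ in range(40):                       # fixed iteration budget
    not_done = np.min(np.abs(Z[..., None] - roots), axis=-1) > 1e-6
    Z = np.where(not_done, Z - (Z**3 - 1.0) / (3.0 * Z**2), Z)
    count += not_done
# Color each grid point by the index of the root it converged to.
basin = np.argmin(np.abs(Z[..., None] - roots), axis=-1)
```

Coloring the grid by `basin` produces figures of the kind shown in Figures 1 and 2, while `count` gives the iteration maps of Figures 3 and 4.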

Numerical results
In this section, we present some numerical examples to show the performance of the developed methods. We also compare our numerical results with those obtained by various iterative methods from the literature. For this purpose, we consider test functions from [2,8,6,19,20] in our experiments. Furthermore, we report the number of iterations (IT) and the computational order of convergence (COC), approximated (see [2]) by means of

COC ≈ ln |(x_{n+1} − α)/(x_n − α)| / ln |(x_n − α)/(x_{n−1} − α)|.
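As a concrete illustration, the COC can be estimated from three consecutive iterates; the test function and starting point below are illustrative choices, not entries from the paper's tables:

```python
import math

def coc(iterates, alpha):
    """Computational order of convergence from the last three iterates:
    ln|(x_{n+1}-a)/(x_n-a)| / ln|(x_n-a)/(x_{n-1}-a)|."""
    e = [abs(x - alpha) for x in iterates[-3:]]
    return math.log(e[2] / e[1]) / math.log(e[1] / e[0])

# Newton's method on f(x) = x**2 - 2 should report a COC close to 2.
xs, x = [], 1.5
for _ in range(3):
    xs.append(x)
    x = x - (x * x - 2.0) / (2.0 * x)
xs.append(x)
order = coc(xs, math.sqrt(2.0))
```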
In Table 1, we compare scheme (2.8) with the results obtained by the harmonic mean Newton's method (HNM) [21], the super-Halley method (SHM) [2] and the modified homotopy perturbation method (MHPM) [4, Algorithm 2.1]. All these methods have order three and use three functional evaluations per step, so their efficiency indices coincide. The computing results for Method I and Method II are given in Table 2. According to Tables 1 and 2, the presented methods produce satisfactory results, and the computational order of convergence confirms the theoretical results.

Conclusion
In this paper, the idea behind the nonstandard finite difference method was used to develop a new Newton-type iterative method for solving nonlinear equations. The convergence analysis shows that this method is cubically convergent. The resulting method was also compared with other third-order methods via numerical examples. In addition, we developed two new multi-point methods with sixth-order convergence. Furthermore, to study the dynamical behavior, we analyzed the basins of attraction of our methods for the polynomial f(z) = z^3 − 1. Finally, the numerical results show that the new methods can be of practical interest, and the computational order of convergence confirms the theoretical results.

Figure 2: Basins of attraction for f (z) = z 3 − 1 for the Method I (left) and Method II (right).

Figure 3: Number of iterations needed by Newton's method (left) and new nonstandard Newton's method (right) to converge to one of the roots of f (z) = z 3 − 1 as a function of the initial condition.

Figure 4: Number of iterations needed by Method I (left) and Method II (right) to converge to one of the roots of f (z) = z 3 − 1 as a function of the initial condition.
Theorem 3.2. Under the hypothesis of Theorem 2.1, the order of convergence of the method defined by (3.2) is six, and it satisfies the following error equation:

Table 1: The comparison of scheme (2.8) with different methods on several test functions.

Table 2: The computing results for Method I and Method II.