English Abstract
Obtaining the roots of equations, especially nonlinear equations, is one of the most important topics in engineering and the basic sciences. Many natural problems can be modeled as nonlinear equations, and finding their roots solves many of the problems in the world around us. For this reason, researchers have studied this problem for many years. The bisection method is one of the most important methods in numerical computation for finding the root of a continuous function that is known to take values of opposite sign at two points. Repeating this method on a function with the mentioned property leads us to the root, provided the function values at the two endpoints of the interval do not share the same sign. More precisely, the value of the function is first computed at the midpoint of the given interval; then, of the two resulting subintervals, the one whose endpoint values have opposite signs is selected. By the continuity of the function, we are certain that the root lies in this subinterval. The algorithm is then repeated on the selected subinterval, bringing us closer to the root. This method is one of the simplest ways to find the root of a function in numerical computation: its calculations are simple and it requires no difficult derivative or integral computations. On the other hand, despite its simplicity, the method is slow and reaches the root later than other proposed root-finding methods. The bisection method, sometimes called the dichotomy or interval-halving method, is similar to the binary search algorithm in computer science. It is the simplest algorithm for finding the roots of equations and can be applied to any continuous function f(x) on an interval [a, b] where the sign of f(x) differs at a and b. The process of the bisection method is simple: we divide the interval into two parts; the root must lie in the subinterval in which the sign of f(x) changes, and we repeat the process on that subinterval.
Meta-heuristic algorithms are a class of stochastic algorithms used to find optimal solutions. Optimization methods and algorithms are divided into two categories: exact algorithms and approximate algorithms. Exact algorithms are able to find the optimal solution precisely, but they are not efficient enough for hard optimization problems, and their execution time grows exponentially with the dimensions of the problem. Approximate algorithms are able to find good (near-optimal) solutions in a short time for difficult optimization problems. Approximate algorithms are in turn divided into three categories: heuristic, meta-heuristic, and hyper-heuristic algorithms. The two main problems with heuristic algorithms are getting trapped in local optima and premature convergence to these points. Meta-heuristic algorithms have been proposed to address these problems. In fact, meta-heuristic algorithms are a type of approximate optimization algorithm that provides mechanisms for escaping local optima and can be applied to a wide range of problems. Various classes of this type of algorithm have been developed in recent decades, all of which are subsets of meta-heuristic algorithms.
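To illustrate the bisection procedure described above, a minimal Python sketch is given below. It is not taken from the dissertation; the function name bisection, the tolerance tol, and the test function cos(x) - x are illustrative assumptions made here only for demonstration.

import math

def bisection(f, a, b, tol=1e-8, max_iter=100):
    # Assumes f is continuous on [a, b] and f(a), f(b) have opposite signs.
    if f(a) * f(b) > 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    for _ in range(max_iter):
        c = (a + b) / 2.0              # midpoint of the current interval
        if f(c) == 0.0 or (b - a) / 2.0 < tol:
            return c                   # midpoint is (close enough to) the root
        if f(a) * f(c) < 0:            # sign change in the left half
            b = c
        else:                          # sign change in the right half
            a = c
    return (a + b) / 2.0

# Example: the root of cos(x) - x on [0, 1], approximately 0.739085
print(bisection(lambda x: math.cos(x) - x, 0.0, 1.0))

Each pass halves the search interval, which is why the method is simple and reliable but slower than higher-order root-finding schemes.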
Various criteria can be used to classify meta-heuristic algorithms. Single-solution-based and population-based: single-solution algorithms modify one candidate solution during the search, while population-based algorithms maintain a population of candidate solutions. Nature-inspired and non-nature-inspired: many meta-heuristic algorithms are inspired by nature, while some are not. With memory and without memory: some meta-heuristic algorithms are memoryless, meaning they do not use the information gathered during the search, whereas others, such as tabu search, use a memory that stores information obtained during the search. Deterministic and stochastic: a deterministic meta-heuristic such as tabu search solves the problem through deterministic decisions, while stochastic meta-heuristics such as simulated annealing apply probabilistic rules during the search.
In mathematics, a fixed point of a function is a point that is mapped to itself by the function; the set of such points is called the fixed-point set. In simpler terms, c is a fixed point of the function f(x) if and only if f(c) = c. Geometrically, a fixed point is a point at which the graph of f meets the line y = x, so that the point (x, f(x)) lies on that line. For example, for the function f(x) = x^2 - 3x + 4 defined on the real numbers, 2 is a fixed point, because f(2) = 2. Not every function has a fixed point; for example, f(x) = x + 1 has none, since the equality x = x + 1 holds for no x. In mathematics, a fixed-point theorem is a theorem stating that if certain conditions are met, the function f is guaranteed to have at least one fixed point, such as x, that is, a point at which f(x) = x. The fixed-point iteration method for solving equations works as follows: write the equation in the form f(x) = x, substitute a chosen number, say k, for x in f(x), then substitute the resulting value f(k) back into f in place of x, and so on; repeating this operation indefinitely brings us closer to the answer (a small sketch of this iteration is given at the end of this abstract).
In the present dissertation, we propose two new iterative methods for solving the fixed-point problem: one combines the advantages of the moth-flame optimization (MFO) algorithm with the bisection method, and the other combines the multi-verse optimization (MVO) algorithm with the bisection method. The remaining sections of the dissertation are organized as follows. In Section 2, a brief overview of the bisection method is given, and the moth-flame optimization algorithm and the fixed-point problem are explained. The two proposed methods are described in Section 3. In Section 4, we use the proposed algorithms to find the fixed points of 16 different functions and compare them with other meta-heuristic algorithms, namely ALO, MVO, MFO, GWO, SCA, SSA, and WOA. Finally, conclusions are given in Section 5. The results show that the proposed algorithms perform better than the other meta-heuristic algorithms in finding the fixed points of the different functions. To make the topics easier to understand and the algorithms easier to compare, the plots of each function and of each algorithm in obtaining that function's fixed point are drawn both separately and in combination.
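The sketch below illustrates the plain fixed-point iteration described above. It is a minimal illustration, not the hybrid methods proposed in the dissertation; the function name fixed_point_iteration, the tolerance tol, and the example map cos(x) are assumptions made here for demonstration.

import math

def fixed_point_iteration(f, x0, tol=1e-8, max_iter=1000):
    # Repeatedly substitute the previous value back into f until successive
    # iterates differ by less than tol. Convergence is not guaranteed for
    # every f; it requires suitable conditions (e.g. f being a contraction).
    x = x0
    for _ in range(max_iter):
        x_new = f(x)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# Example: the fixed point of f(x) = cos(x), approximately 0.739085
print(fixed_point_iteration(math.cos, 1.0))

Note that the fixed point of cos(x) coincides with the root of cos(x) - x used in the earlier bisection sketch, which reflects how a fixed-point problem f(x) = x can be restated as a root-finding problem f(x) - x = 0.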