Abstract (in English)
Today, with the deployment of new-generation communication networks, we are witnessing rapid growth of the Internet of Things (IoT) and the emergence of new applications in this context. Limitations in the size, computing power, and energy of devices connected to the IoT platform are among its major challenges, leaving devices unable to run applications with high computational load and low-latency requirements. Computational offloading in multi-access edge computing, which places computing and storage resources near users, is an effective way to address these challenges. However, due to user mobility and changes in application characteristics over time, assigning edge servers to users so as to reduce latency remains challenging. Existing mobility-aware approaches in this area often rely on simple mobility models that cannot detect and exploit user mobility patterns. Moreover, with coarse-grained offloading, the entire program is offloaded to the edge; when the user moves and the offloaded computation must migrate, repeated migration of the whole program increases network overhead and latency, reduces efficiency, and can retrigger migration due to factors such as load balancing on the destination server.
In this study, to cope with frequent migration and its drawbacks, a fine-grained computation offloading method is presented. In this method, each user's application is modeled as a graph consisting of components and the data invocations between them. By assigning each of a user's components to a server individually, rather than offloading the entire program as a whole, migration triggered by user mobility incurs lower communication overhead and latency and achieves higher efficiency. In addition, by predicting the user's location and the time-varying characteristics of the offloaded components using machine learning, we obtain the optimal offloading decision with respect to the objective function: minimizing the total offloading delay, comprising the offload, processing, and migration times of all users. According to the evaluation results, the proposed method improves on the related work in terms of the problem objective, i.e., reducing the total offloading time, by an average of 12.2 percent across all experiments. This reduction stems from the mobility prediction model combined with fine-grained offloading, whose benefits reduce migrations and the resulting overhead.
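To make the objective concrete, the following is a minimal illustrative sketch (not the thesis implementation) of fine-grained offloading: each application component is placed on an edge server so that its delay, the sum of offload, processing, and migration times, is minimized, and a migration cost is paid only when a component moves away from its previous server. All names, the greedy placement strategy, and the delay model parameters here are assumptions made for illustration.

```python
# Illustrative sketch of fine-grained, per-component offloading.
# Delay model and greedy strategy are assumptions, not the thesis method.

def total_delay(component, server, prev_server_id):
    """Estimated delay of placing one component on one server."""
    offload = component["data_size"] / server["uplink_rate"]    # transmission time
    processing = component["cycles"] / server["cpu_rate"]       # execution time
    # Migration cost is incurred only if the component changes servers.
    migration = 0.0
    if server["id"] != prev_server_id:
        migration = component["state_size"] / server["backhaul_rate"]
    return offload + processing + migration

def assign_components(components, servers, prev_assignment):
    """Greedily place each component on the server with the lowest delay."""
    assignment, total = {}, 0.0
    for c in components:
        prev = prev_assignment.get(c["id"])
        best = min(servers, key=lambda s: total_delay(c, s, prev))
        assignment[c["id"]] = best["id"]
        total += total_delay(c, best, prev)
    return assignment, total

# Two components with different compute/data profiles, two edge servers.
components = [
    {"id": "c1", "data_size": 8.0, "cycles": 40.0, "state_size": 2.0},
    {"id": "c2", "data_size": 4.0, "cycles": 90.0, "state_size": 1.0},
]
servers = [
    {"id": "s1", "uplink_rate": 4.0, "cpu_rate": 20.0, "backhaul_rate": 10.0},
    {"id": "s2", "uplink_rate": 2.0, "cpu_rate": 60.0, "backhaul_rate": 10.0},
]
# Both components previously ran on s1; only c2 migrates, because its
# processing savings on s2 outweigh the migration cost.
assignment, delay = assign_components(components, servers, {"c1": "s1", "c2": "s1"})
```

Because placement is decided per component, only the components that actually benefit are migrated, which is the source of the overhead reduction the abstract describes.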