English Abstract
The growing use of smart devices has ushered in the era of big data. Extracting the required knowledge from this data involves various kinds of processing, but conventional hardware and software tools are not suited to processing at this scale. Cloud computing provides users with abundant, reliable, remote, and low-cost resources, so cloud resources can be an appropriate choice for Internet of Things (IoT) objects that need to store and process big data. However, since IoT applications are delay-sensitive and require mobility and location-aware support, cloud computing alone cannot satisfy these needs. To overcome these challenges, an extended architecture called fog computing has been introduced, in which resources are moved to the edge of the network. One of the difficulties that arises in fog computing is allocating tasks to nodes so as to reduce user-perceived latency.
In this study, to reduce latency and increase user satisfaction given the limited capacity of each node, we intend to use the computational resources of other fog nodes to serve the remaining tasks; if the capacity of all fog nodes is exhausted, the fog layer transfers the task to the cloud at the cost of additional delay. Because selecting the appropriate node and the amount of resources for each remaining task is a highly complex problem, various heuristics and metaheuristics have been applied to it. These methods, however, focus only on the resources needed by a single task at each stage, disregarding the requirements of the task set as a whole, and thus cannot allocate resources optimally. Moreover, in most cases tasks are allocated exactly as much as they request, even though an optimal allocation could grant them more. To address this challenge, we seek a solution based on Monte Carlo Tree Search that, in addition to accepting the maximum number of tasks, allocates as many resources as possible to each of them, so that processing is accelerated and latency is therefore reduced.
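The allocation idea above can be illustrated with a minimal Monte Carlo Tree Search sketch. This is not the thesis implementation: the task demands, node capacities, cloud penalty, and the one-level UCT formulation are all illustrative assumptions. Each task is placed either on a fog node with sufficient remaining capacity or offloaded to the cloud (modeled as a lower reward, reflecting its higher delay), and random playouts estimate the value of each placement.

```python
import math
import random

# Hypothetical instance (illustrative only, not from the thesis):
TASKS = [2, 3, 1, 4, 2]      # resource units requested by each task
NODES = [5, 6]               # capacity of each fog node
CLOUD = -1                   # sentinel action: offload to the cloud
CLOUD_REWARD = 0.5           # cloud placement earns less reward (higher delay)

def actions(state):
    """Placements available for the next unassigned task."""
    i, caps = state
    fog = [n for n, c in enumerate(caps) if c >= TASKS[i]]
    return fog + [CLOUD]     # the cloud always has room

def step(state, a):
    """Apply a placement and return the successor state."""
    i, caps = state
    caps = list(caps)
    if a != CLOUD:
        caps[a] -= TASKS[i]
    return (i + 1, tuple(caps))

def rollout(state):
    """Random playout to the end; fog placements score higher."""
    reward = 0.0
    while state[0] < len(TASKS):
        a = random.choice(actions(state))
        reward += CLOUD_REWARD if a == CLOUD else 1.0
        state = step(state, a)
    return reward

def mcts(state, iters=2000, c=1.4):
    """One-level UCT: choose the best placement for the current task."""
    stats = {a: [0, 0.0] for a in actions(state)}  # action -> [visits, total reward]
    for t in range(1, iters + 1):
        # UCB1 selection over the root actions (unvisited actions first)
        a = max(stats, key=lambda a: float("inf") if stats[a][0] == 0
                else stats[a][1] / stats[a][0] + c * math.sqrt(math.log(t) / stats[a][0]))
        r = (CLOUD_REWARD if a == CLOUD else 1.0) + rollout(step(state, a))
        stats[a][0] += 1
        stats[a][1] += r
    return max(stats, key=lambda a: stats[a][0])   # most-visited action

random.seed(0)
state = (0, tuple(NODES))
plan = []
while state[0] < len(TASKS):
    a = mcts(state)
    plan.append(a)
    state = step(state, a)
print(plan)  # each task's chosen fog node index, or -1 for the cloud
```

Because the demands here (12 units) exceed the total fog capacity (11 units), at least one task must fall back to the cloud; the playout reward steers the search toward keeping as many tasks as possible in the fog layer, mirroring the objective described above.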
According to the simulation results, under varying conditions the proposed method serves 50% to 80% of requests in the fog layer, which is 2 to 5 times the rate at which the compared method accepts requests there. Moreover, 80% of these requests receive twice the resources they requested. Since executing these tasks in the fog layer, and with more resources, incurs less delay than executing them in the cloud layer, the total delay is also reduced.