Abstract
With the rapid growth of emerging technologies such as the Metaverse and the Digital Twin, effective integration of communication, computing, and data storage in complex systems and edge computing environments has become increasingly critical. Because of their intricate interactions and strong dependence on environmental conditions, such systems demand advanced, intelligent methods for managing and allocating resources. Among their main challenges are ensuring reliability, reducing end-to-end latency, and optimizing energy consumption for time-sensitive applications such as Augmented Reality and Internet of Things systems. In this regard, the integration of edge computing and reinforcement learning has been proposed as a promising approach to optimal resource allocation and network task management in these complex environments.
In this research, a new model is introduced that leverages reinforcement learning techniques, namely the Deep Q-Network (DQN), the Double Deep Q-Network (DDQN), and the Dueling Double Deep Q-Network (Dueling DDQN), to optimize communication, computing, and storage resources in network systems. Unlike traditional optimization techniques, these methods can learn from the environment, adapt to changes, and make intelligent decisions even in complex, large-scale scenarios. The proposed model is designed to minimize end-to-end latency while preserving users' quality of experience and reliability, thereby overcoming the limitations of existing methods in handling high-dimensional state and action spaces.
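To make the learning setup concrete, the following is a minimal sketch, not the thesis implementation, of a dueling double deep Q-network agent that decides where each task should be executed (locally, at an edge server, or in the cloud). The state dimension, the three-way action set, the hyperparameters, and the class and method names are illustrative assumptions; the reward is intended to represent a negative weighted sum of latency and energy, in line with the objectives described above.

# Minimal sketch, assuming a PyTorch implementation. The state features,
# action set (local / edge / cloud), and hyperparameters are illustrative
# assumptions and are not taken from the thesis.
import random
from collections import deque

import torch
import torch.nn as nn
import torch.optim as optim


class DuelingQNetwork(nn.Module):
    """Dueling architecture: a shared trunk with separate value and advantage heads."""

    def __init__(self, state_dim, n_actions, hidden=64):
        super().__init__()
        self.trunk = nn.Sequential(nn.Linear(state_dim, hidden), nn.ReLU())
        self.value = nn.Linear(hidden, 1)              # V(s)
        self.advantage = nn.Linear(hidden, n_actions)  # A(s, a)

    def forward(self, state):
        h = self.trunk(state)
        v, a = self.value(h), self.advantage(h)
        # Q(s, a) = V(s) + A(s, a) - mean_a A(s, a)
        return v + a - a.mean(dim=1, keepdim=True)


class OffloadingAgent:
    """Chooses among three hypothetical actions: run locally, offload to edge, offload to cloud."""

    def __init__(self, state_dim=6, n_actions=3, gamma=0.99, lr=1e-3):
        self.online = DuelingQNetwork(state_dim, n_actions)
        self.target = DuelingQNetwork(state_dim, n_actions)
        self.target.load_state_dict(self.online.state_dict())
        self.optimizer = optim.Adam(self.online.parameters(), lr=lr)
        self.buffer = deque(maxlen=10_000)
        self.gamma, self.n_actions = gamma, n_actions

    def act(self, state, epsilon=0.1):
        # Epsilon-greedy exploration over the offloading decisions.
        if random.random() < epsilon:
            return random.randrange(self.n_actions)
        with torch.no_grad():
            q = self.online(torch.tensor([state], dtype=torch.float32))
        return int(q.argmax(dim=1).item())

    def remember(self, state, action, reward, next_state, done):
        # Reward would be, e.g., the negative weighted sum of observed latency and energy.
        self.buffer.append((state, action, reward, next_state, float(done)))

    def train_step(self, batch_size=32):
        if len(self.buffer) < batch_size:
            return
        batch = random.sample(self.buffer, batch_size)
        s, a, r, s2, done = (torch.tensor(x, dtype=torch.float32) for x in zip(*batch))
        q = self.online(s).gather(1, a.long().unsqueeze(1)).squeeze(1)
        with torch.no_grad():
            # Double DQN target: the online net selects the action, the target net evaluates it.
            best = self.online(s2).argmax(dim=1, keepdim=True)
            target = r + self.gamma * (1.0 - done) * self.target(s2).gather(1, best).squeeze(1)
        loss = nn.functional.mse_loss(q, target)
        self.optimizer.zero_grad()
        loss.backward()
        self.optimizer.step()

    def sync_target(self):
        # Periodically copy the online weights into the target network.
        self.target.load_state_dict(self.online.state_dict())

In such a setup, each episode would step through arriving tasks, call act() for the offloading decision, record the observed latency and energy cost via remember(), and alternate train_step() with periodic sync_target() calls.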
The simulation results indicate that, compared with traditional methods and conventional optimization algorithms, the proposed method reduces latency and energy consumption and improves resource utilization. Moreover, incorporating the Digital Twin concept into the model enhances the accuracy of network parameter predictions and provides better synchronization between the physical and virtual models, which improves stability and overall network performance. Consequently, the proposed framework offers high reliability and efficiency for Metaverse and 6G applications and demonstrates strong scalability and adaptability across network scenarios, including industrial IoT applications, smart cities, and vehicular networks, where dynamic task offloading and resource management are of utmost importance.