Abstract (in English)
The Weather Research and Forecasting (WRF) model is a computation-, I/O-, and network-intensive simulation system, and running it efficiently even on the fastest computers is a major challenge. Its optimal and timely execution is a concern for many meteorological organizations and institutions. High-performance computing applications are mainly managed and executed on cluster and cloud computing infrastructures, whose abundant computational resources enable large amounts of data to be processed in parallel. These infrastructures typically use virtualization to manage access to resources, which imposes additional overhead on the cluster.
The use of lightweight virtualization, while introducing minimal overhead, preserves the benefit of unrestricted access to resources in the cloud infrastructure.
To demonstrate the efficiency of the proposed framework, a comparative study was carried out between the container technologies used in this project, namely Docker and Singularity, and KVM-based virtualization, for running the WRF model. The study shows that container-based lightweight virtualization reduces the overall execution time of the WRF model, a scientific application requiring high-performance computing, compared with hypervisor-based virtualization.
Estimating the execution time of the meteorological model on a given software and hardware platform and at different geographic scales is important, so that meteorological organizations can analyze and report the model output within the desired time frame.
In this study, the variables that influence the execution speed of the WRF meteorological model were identified. Then, a reference grid spacing with a specified number of grid points was selected, together with four other grid spacings whose numbers of grid points are exact multiples of the reference; by executing all of them with the WRF model, the run time per grid point was obtained. Using numerical simulation, a mathematical relationship for the model's run time at different scales was proposed for the first time. This equation calculates the run time of the model, in minutes, from the number of grid points, the grid spacing, the model simulation time, and the number of CPUs used in the hardware.
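As a minimal sketch of the kind of estimator described above: the abstract does not give the actual formula or its coefficients, so the functional form below (linear in grid points and simulation time, inverse in CPU count) and the per-point reference cost are illustrative assumptions only. The reference cost would, in practice, come from the benchmark runs on the reference grid.

```python
# Hypothetical run-time estimator in the spirit of the relationship
# described in the abstract. The scaling behavior and the value of
# t_ref_per_point are assumptions, not the thesis's actual equation.

def estimate_runtime_minutes(n_grid_points, sim_hours, n_cpus,
                             t_ref_per_point=1e-4):
    """Estimate WRF run time in minutes.

    t_ref_per_point: assumed cost (minutes) to advance one grid point
    for one simulated hour on a single CPU, as would be measured from
    the reference-grid benchmark runs.
    """
    if n_cpus <= 0:
        raise ValueError("n_cpus must be positive")
    return n_grid_points * sim_hours * t_ref_per_point / n_cpus

# Example: a 100x100 grid, 24 simulated hours, 16 CPUs
print(estimate_runtime_minutes(100 * 100, 24, 16))  # → 1.5
```

Such a closed-form estimate lets an operator predict, before submitting a job, whether a forecast at a given scale will finish within the reporting deadline.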