Abstract in English
Abstract

Mistakes are an inevitable part of software development, and they can have serious consequences: unexpected program behavior, increased time and cost of maintenance, and even financial loss. Locating the fault is therefore essential to restoring correct behavior, and the more precisely a fault is localized, the faster and less costly the repair. Many fault localization techniques exist, each with its own strengths and weaknesses. Statistical methods are widely used because they can rank program elements by how suspicious they are, but they are not foolproof. This thesis examines the advantages and disadvantages of existing fault localization methods and seeks to improve upon them.

One major weakness of statistical methods is that they can be misled by loops in the code: when the same statements are executed repeatedly within a single run, the collected statistics become distorted and the method loses accuracy. Another weakness is that these methods do not properly account for coincidentally correct test cases, i.e., runs that execute the faulty code yet still pass; such runs skew the statistics and make the fault harder to locate. To overcome these challenges, we analyze how different execution paths behave in passing and failing tests. Using joint entropy and clustering algorithms, we identify coincidentally correct runs and use this information to improve fault localization. In addition, clustering executions makes statistical methods more targeted and improves their localization accuracy. This thesis discusses the advantages, disadvantages, and open challenges of fault localization. Our goal is to identify the most effective approach to software fault localization and to propose novel methods that address the existing challenges in this field.

Keywords: Software testing, Fault localization, Coincidentally Correct test cases, Clustering of test cases, Cross Entropy
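To make the statistical ranking idea concrete, the following is a minimal illustrative sketch (not the thesis's actual method): it scores statements with the well-known Ochiai formula from spectrum-based fault localization, using invented coverage counts. All names and data here are assumptions for illustration only.

```python
# Illustrative sketch of statistical (spectrum-based) fault localization.
# The Ochiai formula scores a statement by how strongly its coverage
# correlates with failing runs. Data below is invented for the example.

def ochiai(failed_cov, passed_cov, total_failed):
    """Suspiciousness of one statement from its coverage counts."""
    if failed_cov == 0 or total_failed == 0:
        return 0.0
    return failed_cov / ((total_failed * (failed_cov + passed_cov)) ** 0.5)

# coverage[stmt] = (times covered by failing tests, times covered by passing tests)
coverage = {
    "s1": (3, 1),   # covered by 3 failing and 1 passing test
    "s2": (3, 9),   # covered by all runs, so less discriminating
    "s3": (0, 5),   # never covered by a failing run
}
total_failed = 3

# Rank statements from most to least suspicious.
ranking = sorted(coverage,
                 key=lambda s: ochiai(*coverage[s], total_failed),
                 reverse=True)
print(ranking)  # → ['s1', 's2', 's3']
```

Note how "s1" ranks highest: it is covered by every failing run but almost no passing ones, which is exactly the signal such rankings rely on, and also exactly the signal that loops and coincidentally correct tests can distort.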
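The idea of detecting coincidentally correct runs by comparing execution behavior can be sketched as follows. This is a hypothetical, simplified heuristic (not the thesis's joint-entropy method): a passing run whose coverage profile lies closer to the centroid of the failing runs than to that of the other passing runs is suspected of executing the fault without exposing it. All profiles below are invented.

```python
# Hypothetical sketch: flag coincidentally correct (CC) passing tests by
# comparing each passing run's coverage vector to the centroids of the
# failing and passing groups. Coverage vectors are invented for the example.

def centroid(profiles):
    """Component-wise mean of a list of equal-length vectors."""
    n = len(profiles)
    return [sum(col) / n for col in zip(*profiles)]

def dist(a, b):
    """Euclidean distance between two vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

failing = [[1, 1, 0], [1, 1, 1]]             # coverage vectors of failing tests
passing = [[1, 1, 0], [0, 0, 1], [0, 1, 1]]  # passing tests; the first one
                                             # covers the same code as the failures

f_cent = centroid(failing)
p_cent = centroid(passing)

# A passing run closer to the failing centroid is a CC suspect.
suspected_cc = [i for i, p in enumerate(passing)
                if dist(p, f_cent) < dist(p, p_cent)]
print(suspected_cc)  # → [0]
```

Once such runs are identified, they can be relabeled or removed before computing suspiciousness scores, which is how accounting for coincidentally correct tests sharpens the statistical ranking.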