Over the past few years, the field of software testing has been transformed by significant strides in automated test case generation (ATCG). Traditionally, creating test cases was a manual, labor-intensive process that consumed a large portion of the software development cycle. ATCG has reduced this human effort, brought speed and consistency, and lowered the risk of missing important test scenarios, improving not only the efficiency but also the effectiveness and reliability of the testing process.

One of the most prominent recent trends in ATCG is the use of AI and machine learning. Techniques such as search-based software testing (SBST) allow testers to generate test cases that cover rare edge cases which are hard to identify manually. AI has also introduced a dimension of predictive analysis, helping to prioritize test scenarios based on their estimated probability of exposing bugs.
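To illustrate the core idea behind SBST, the sketch below uses a simple hill-climbing search guided by a branch-distance fitness function to find an input that triggers a rare branch. The target condition, function names, and step sizes are purely illustrative assumptions, not part of any particular SBST tool:

```python
import random


def branch_distance(x: int) -> int:
    """Distance from satisfying a hypothetical rare branch condition (x == 10000).

    A common SBST fitness: zero when the branch is taken, larger otherwise.
    """
    return abs(x - 10000)


def search_test_input(max_iterations: int = 100_000, seed: int = 0) -> int:
    """Hill-climbing search for an input that reaches the target branch."""
    rng = random.Random(seed)
    candidate = rng.randint(-1_000_000, 1_000_000)
    for _ in range(max_iterations):
        if branch_distance(candidate) == 0:
            return candidate
        # Mutate the candidate and keep the mutation only if it gets closer.
        neighbor = candidate + rng.choice([-1000, -100, -1, 1, 100, 1000])
        if branch_distance(neighbor) < branch_distance(candidate):
            candidate = neighbor
    return candidate
```

Random testing would need on the order of millions of draws to hit this branch by chance, whereas the fitness-guided search converges in a few thousand iterations; this gap is what makes SBST effective for rare edge cases.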
One of the major problems in this field is the evaluation and comparison of the different approaches. The goal of this work is to create a benchmarking environment for evaluating such machine-generated sets of test cases and their coverage of the input space.
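To make "coverage of the input space" concrete, one simple (hypothetical) metric partitions each input dimension into equal-width bins and measures the fraction of grid cells hit by at least one test case. The function name, bin count, and bounds below are illustrative assumptions, not the metric this project will necessarily use:

```python
def input_space_coverage(test_cases, lows, highs, bins=10):
    """Fraction of grid cells in a boxed input space hit by at least one test case.

    test_cases: iterable of tuples, one value per input dimension.
    lows/highs: per-dimension bounds of the input space.
    bins: number of equal-width bins per dimension.
    """
    hit = set()
    for case in test_cases:
        cell = []
        for value, lo, hi in zip(case, lows, highs):
            # Clamp to the bounds, then map to a bin index in [0, bins - 1].
            ratio = (min(max(value, lo), hi) - lo) / (hi - lo)
            cell.append(min(int(ratio * bins), bins - 1))
        hit.add(tuple(cell))
    return len(hit) / bins ** len(lows)
```

For example, two test cases `(0.5, 0.5)` and `(9.5, 9.5)` in the box `[0, 10] × [0, 10]` with 10 bins per dimension hit 2 of 100 cells, giving a coverage of 0.02. Grid-based metrics like this scale poorly with dimensionality, which is one reason evaluating and comparing ATCG approaches is hard.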
The core of this project will be a fully customizable driving AI.
The AI should: