Automated slicing scheme for test case prioritization in regression testing

Authors

  • Manju Kaushal, Vellore Institute of Technology, Vellore
  • Satheesh Abimannan, Vellore Institute of Technology, Vellore

DOI:

https://doi.org/10.14419/ijet.v7i4.28037

Published:

2019-03-22

Keywords:

Automated Slicing, Regression Testing, Test Case Prioritization.

Abstract

Motivation: Regression testing is the testing approach that ensures the software has no adverse effects due to changes made to existing features or the addition of new features; it is performed to test the changes made to previous versions of the software. To keep the number of test cases in the software from growing too large, it is important to select the regression tests, and several techniques have been designed to do so. The existing work applied the slicing technique to detect the individual functions in the software. The parameters used in this approach are calculated manually to analyze the importance of individual functions: the number of times a function is encountered, and the number of functions relevant to that specific function. A hybrid technique is generated by combining modification-, minimization- and prioritization-based selection, using a list of changes in the source code together with the execution traces produced by running the test cases on previous versions.
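A minimal sketch, in Python rather than the paper's MATLAB, of how the two per-function parameters described above (the trigger count and the number of attached functions) could be derived from execution traces. The trace format, function names, and the use of adjacency in a trace as "attachment" are illustrative assumptions, not details published in the paper.

```python
from collections import defaultdict

def function_parameters(traces):
    """Derive the two per-function parameters from execution traces.

    Each trace is the ordered list of functions a test case executed.
    Returns {function: (trigger_count, attached_count)}, where
    trigger_count is how often the function appears across all traces,
    and attached_count is how many distinct functions appear adjacent
    to it (immediately before or after) in any trace.
    """
    triggers = defaultdict(int)
    attached = defaultdict(set)
    for trace in traces:
        for i, fn in enumerate(trace):
            triggers[fn] += 1
            if i > 0:  # record mutual attachment with the previous call
                attached[fn].add(trace[i - 1])
                attached[trace[i - 1]].add(fn)
    return {fn: (triggers[fn], len(attached[fn])) for fn in triggers}

# Two hypothetical test-case traces over three functions
traces = [["main", "parse", "eval"], ["main", "eval"]]
params = function_parameters(traces)
# "main" is triggered twice and attached to {"parse", "eval"} → (2, 2)
```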

Problem Statement: In the existing system, the manual slicing technique is applied to perform test case prioritization. In manual slicing, the total number of times a function is triggered and the total number of functions attached to it are calculated manually to generate the final function importance. This approach is time consuming and inaccurate.

Method: In this paper, we study test case prioritization, a form of regression testing that prioritizes the test cases based on the changes made: the test cases of the functions with the highest priority are executed first, and so on. To identify the maximum number of faults in the modified software, both manual slicing and automated slicing are applied in this work. The proposed method is an enhancement of the manual slicing technique: the automated slicing technique automatically calculates functional importance from the number of attached functions and the number of times a function is triggered. The proposed method has a low execution time and detects more defects in the software. A dataset of ten different projects is used to test the performance of the proposed and existing algorithms in MATLAB; each project has seven functions, and four changes are defined for the regression testing.
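The prioritization step described above can be sketched as follows, again in Python rather than the paper's MATLAB. The weighting (importance = trigger count + attached count, restricted to changed functions) and all names are assumptions for illustration; the paper does not publish its exact formula.

```python
def function_importance(params, changed):
    """Score each function from its two slicing parameters.

    params:  {function: (trigger_count, attached_count)}
    changed: set of functions modified in the new version.
    Unchanged functions score 0 so their tests sink to the end.
    """
    return {fn: (t + a if fn in changed else 0)
            for fn, (t, a) in params.items()}

def prioritize(test_cases, importance):
    """Order test cases by the highest importance of any function they
    cover, so tests exercising heavily changed code run first."""
    return sorted(
        test_cases,
        key=lambda tc: max((importance.get(fn, 0) for fn in tc["covers"]),
                           default=0),
        reverse=True)

# Hypothetical parameters and change set
params = {"parse": (5, 3), "eval": (2, 1), "log": (9, 4)}
importance = function_importance(params, changed={"parse", "eval"})
tests = [
    {"id": "t1", "covers": ["log"]},
    {"id": "t2", "covers": ["eval"]},
    {"id": "t3", "covers": ["parse", "log"]},
]
ordered = prioritize(tests, importance)
# → t3 first (parse scores 8), then t2 (eval scores 3), then t1
#   (log is unchanged, so it scores 0 despite being triggered often)
```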

Results: The simulation results show that, in comparison to the manual method, the implementation of automated test case prioritization improves the fault detection rate and reduces the execution time.


