
Software Test Measurements

  1. Abu, G., Cangussu, J. W., and Turi, J.: A quantitative learning model for software test process. In HICSS ’05: Proceedings of the 38th Annual Hawaii International Conference on System Sciences - Track 3, 2005, page 78.2, Washington, DC, USA. IEEE Computer Society.
  2. Adler, M. and Gray, M. A. (1983): A formalization of Myers cause-effect graphs for unit testing. SIGSOFT Softw. Eng. Notes, 8(5):24–32.
  3. Agrawal, H.: Efficient Coverage Testing Using Global Dominator Graphs. Software Engineering Notes, 24(1999)5, pp. 11-20
  4. Alagar, V. S.; Ormandjieva, O.: Testing Measurement in Real-Time Reactive Systems. Proc. of the ESCOM-SCOPE 2000, April 2000, Munich, Germany, Shaker Publ., pp. 487-495
  5. Arlt, R.: Management of the failure correction process. Proc. of the CONQUEST 2001, Nuremberg, Germany, September 2001, pp. 41-50
  6. Astels, D. (2003). Test Driven Development: A Practical Guide. Prentice Hall Professional Technical Reference.
  7. Avetisyan, A. I. et al.: Open Cirrus: A Global Cloud Computing Testbed. IEEE Computer, 43(2010)4, pp. 35-43
  8. Bache, R.; Muellerburg, M.: Measures of testability as a basis for quality assurance. Software Engineering Journal, March 1990, pp. 85-92
  9. Bainbridge, J.: Defining Testability Metrics Axiomatically. Software Testing, Verification and Reliability, 4(1994), pp. 63-80
  10. Basanieri, F.; Bertolino, A.; Marchetti, E.: CoWTeST: A Cost Weighted Test Strategy. Proc. of the ESCOM 2001, April 2001, London, pp. 387-396
  11. Basili, V.R.; Selby, R.W.: Comparing the Effectiveness of Software Testing Strategies. IEEE Transactions on Software Engineering, 13(1987)12, pp. 1278-1296
  12. Bastani, F.B.; DiMarco, G.; Pasquini, A.: Experimental Evaluation of a Fuzzy-Set Based Measure of Software Correctness Using Program Mutation. Proceedings of the 15th International Conference on Software Engineering, May 17-21, Baltimore, 1993, pp. 45-54
  13. Baudry, B.; Traon, Y. Le; Sunye, G.: Testability Analysis of a UML Class Diagram. Proc. of the Eighth IEEE Symposium on Software Metrics (METRICS 2002), June 4-7, 2002, Ottawa, Canada, pp. 54-65
  14. Beizer, B. (1995). Black-box testing: techniques for functional testing of software and systems. John Wiley & Sons, Inc., New York, NY, USA.
  15. Bertolino, A. (2007). Software testing research: Achievements, challenges, dreams. In FOSE ’07: 2007 Future of Software Engineering, pages 85–103, Washington, DC, USA. IEEE Computer Society.
  16. Bochmann, G.v. et al.: Summary of Discussion on 'Design for Testability'. Montreal, July 1992
  17. Boehm, B.; Basili, V. R.: Software Defect Reduction Top 10 List. IEEE Computer, January 2001, pp. 135-137
  18. Briand, L.C.; Basili, V.R.; Hetmanski, C.J.: Providing an Empirical Basis for Optimizing the Verification and Testing Phases of Software Development. Proceedings of the Third International Symposium on Software Reliability Engineering, Research Triangle Park, NC, October 8-9, 1992, pp. 329-338
  19. Briand, L. and Labiche, Y. (2004). Empirical studies of software testing techniques: challenges, practical strategies, and future research. SIGSOFT Softw. Eng. Notes, 29(5):1–3.
  20. Broekman, B.: Estimating testing effort, using test point analysis (TPA). Proc. of the FESMA'98, Antwerp, Belgium, May 6-8, 1998, pp. 323-338
  21. Broekman, B. and Notenboom, E. (2003). Testing Embedded Software. Addison-Wesley, Great Britain
  22. Burnstein, I. (2003). Practical Software Testing: A Process-oriented Approach. Springer Inc., New York, NY, USA
  23. Cai, X.; Lyu, M.R.: Software Reliability Modeling with Test Coverage: Experimentation and Measurement with A Fault-Tolerant Software Project. The 18th IEEE International Symposium on Software Reliability Engineering (ISSRE '07), 2007, pp. 17-26
  24. Canfora, G., Cimitile, A., Garcia, F., Piattini, M., and Visaggio, C. A. (2006). Evaluating advantages of test driven development: a controlled experiment with professionals. In ISESE ’06: Proceedings of the 2006 ACM/IEEE international symposium on Empirical software engineering, pages 364–371, New York, NY, USA. ACM.
  25. Cangussu, J. W.; DeCarlo, R. A.; Mathur, A. P.: Monitoring the Software Test Process Using Statistical Process Control: A Logarithmic Approach. Software Engineering Notes, 28(2003)5, pp. 158-167
  26. Carr, G.: Independent Testing at the Prudential. First European International Conference on Software Testing, Analysis & Review (EuroStar), London, October 25-28, 1993, pp. 37-42
  27. Carbno, C. C.: Using Calibrated Zipf Capture-Recapture for Estimating the Defects Remaining. In: Dumke/Abran: Current Trends in Software Measurement, Shaker Publ., Aachen, Germany, 2001, pp. 143-152
  28. Chang, J.; Richardson, D. J.: Structural Specification-Based Testing: Automated Support and Experimental Evaluation. Proc. of the ESEC/FSE'99, Toulouse, France, September 1999, pp. 285-302
  29. Chen, Y., Probert, R. L., and Robeson, K.: Effective test metrics for test strategy evolution. In CASCON ’04: Proceedings of the 2004 conference of the Centre for Advanced Studies on Collaborative research, 2004, pages 111–123. IBM Press.
  30. Chernak, Y. (2004). Introducing TPAM: Test process assessment model. Crosstalk-The Journal of Defense Software Engineering.
  31. Chung, C.M.; Shih, T.K.; Wang, C.C.: Integrating object-oriented software testing and metrics. International Journal of Software Engineering and Knowledge Engineering, 7(1997)1, pp. 125-144
  32. Chusho, T.: Test Data Selection and Quality Estimation Based on the Concept of Essential Branches for Path Testing. IEEE Transactions on Software Engineering, 13(1987)5, pp. 509-517
  33. Emerson, T.J.: Program Testing, Path Coverage, and the Cohesion Metric. IEEE COMPSAC, 1984, pp. 421-431
  34. Davey, S. et al.: Metrics Collection in Code and Unit Test as Part of Continuous Quality Improvement. First European International Conference on Software Testing, Analysis & Review (EuroStar), London, October 25-28, 1993, pp. 37-42
  35. Debbarma, M.K.; Kar, N.; Saha, A.: Static and dynamic software metrics complexity analysis in regression testing. International Conference on Computer Communication and Informatics (ICCCI), 2012, pp. 1-6
  36. Drabick, R. D.: Best Practices for the Formal Software Testing Process: A Menu of Testing Tasks. Dorset House, 2003
  37. Ebert, Ch.; Liedtke, T.: Experience in complexity-based error estimation. (German) in: Dumke/Zuse: Theorie und Praxis der Softwaremessung, Deutscher Universitaetsverlag, Wiesbaden, 1994, pp. 78-96
  38. Eickelmann, N.: Measuring and evaluating the software test process. Proc. of the FESMA'98, Antwerp, Belgium, May 6-8, 1998, pp. 339-346
  39. Elbaum, S.; Gable, D.; Rothermel, G.: Understanding and Measuring the Sources of Variation in the Prioritization of Regression Test Suites. Proc. of the Seventh International Software Metrics Symposium METRICS 2001, April 2001, London, pp. 169-180
  40. Elbaum, S.; Malishevsky, A.G.; Rothermel, G.: Test Case Prioritization: A Family of Empirical Studies. IEEE Transactions on Software Engineering, 28(2002)2, pp. 146-158
  41. Eldh, S., Hansson, H., Punnekkat, S., Pettersson, A., and Sundmark, D.: A framework for comparing efficiency, effectiveness and applicability of software testing techniques. In TAIC-PART ’06: Proceedings of the Testing: Academic & Industrial Conference on Practice And Research Techniques, 2006, pages 159–170, Washington, DC, USA. IEEE Computer Society.
  42. El-Far, I. K. and Whittaker, J. A.: Model-based software testing. Encyclopedia of Software Engineering, 2001
  43. Eski, S.; Buzluca, F.: An Empirical Study on Object-Oriented Metrics and Software Evolution in Order to Reduce Testing Costs by Predicting Change-Prone Classes. IEEE Fourth International Conference on Software Testing, Verification and Validation Workshops (ICSTW), 2011, pp. 566-571
  44. Evanco, W.M.: Ordered Response Models for the Analysis of Software Fault Correction Effort. Proceedings of the Annual Oregon Workshop on Software Metrics, April 10-12, 1994, Silver Falls, Oregon
  45. Farooq, A.: An Evaluation Framework for Software Test Processes. University of Magdeburg, Dept. of Computer Science, PhD, 2009
  46. Farooq, A.; Dumke, R.R.: A Critical Analysis of Testing Maturity Model. Metrics News, Journal of GI-Interest Group on Software Metrics, 12(2007)1, February 2007, pp. 35-40
  47. Farooq, A., Dumke, R. R., Hegewald, H., and Wille, C.: Structuring test process metrics. In MetriKon 2007: Proceedings of the DASMA Software Metrik Kongress, 2007, pages 95–102, Aachen, Germany. Shaker Verlag.
  48. Farooq, A., Dumke, R. R., Schmietendorf, A., and Hegewald, H.: A classification scheme for test process metrics. In SEETEST 2008: South East European Software Testing Conference, Heidelberg, Germany. dpunkt.verlag 
  49. Farooq, A.; Georgieva, K.; Dumke, R.R.: A Meta-Measurement Approach for Software Test Processes. Proceedings of the 12th IEEE - International Multitopic Conference (IEEE INMIC 2008), December 23-24, 2008, Karachi, Pakistan, pp. 333-338, Bahria University, Karachi Campus & IEEE-Karachi Section
  50. Farooq, A.; Georgieva, K.; Dumke, R. R.: Challenges in Evaluating SOA Test Processes. In: Dumke/Braungarten/Büren/Abran/Cuadrado-Gallego: Software Process und Product Measurement. LNCS 5338, Springer-Verlag, 2008, pp. 107-11
  51. Farooq, A.; Georgieva, K.; Schmietendorf, A.; Dumke, R.R.: A Systematic Method for Identifying Testing Project Risks. Proceedings of the International Conference on Quality Engineering in Software Technology (CONQUEST 2010), 20-22 September, 2010, Dresden, Germany, CD-ROM
  52. Fehlmann, T.; Kranich, E.: Measuring Software Tests with COSMIC. In: Büren et al.: MetriKon 2013 – Praxis der Software-Messung, Shaker Verlag, Aachen, 2013
  53. Fink, G.; Bishop, M.: Property-Based Testing: A New Approach to Testing for Assurance. Software Engineering Notes, 22(1997)4, July, pp. 74-80
  54. Georgieva, K.: Testing of Aspect Oriented Programs - Testing Aspect Oriented Programs as Object Oriented Ones. Lambert Academic Publishing AG & Co. KG, Saarbrücken, 2010
  55. Georgieva, K.; Farooq, A.; Dumke R.R.: A Risk Taxonomy for the Software Testing Process. In: G. Büren, R.R. Dumke: Praxis der Software-Messung - Tagungsband des DASMA Software Metrik Kongresses (MetriKon 2009), Kaiserslautern, Shaker Verlag, Aachen, 2009, pp. 247-260
  56. Grottke, M.; Dussa-Ziegler, K.: Systematic vs. Operational Testing. Proc. of the CONQUEST 2001, Nuremberg, Germany, September 2001, pp. 59-68
  57. Gutjahr, W. J. (1999). Partition testing vs. random testing: The influence of uncertainty. IEEE Trans. Softw. Eng., 25(5):661–674
  58. Harris, I. G. (2006). A coverage metric for the validation of interacting processes. In DATE ’06: Proceedings of the conference on Design, automation and test in Europe, pages 1019–1024, 3001 Leuven, Belgium. European Design and Automation Association
  59. Harrison, W.: Using Software Metrics to Allocate Testing Resources. Journal of Management Information Systems, 4(1988)4, pp. 93-105
  60. Horgan, J.R.; London, S.; Lyu, M.R.: Achieving Software Quality with Testing Coverage Measures. IEEE Computer, September 1994, pp. 60-68
  61. Humayun, S.; Soomro, M.H.: Development of a test bed for monitoring & control software of a ground station & its analysis by application of standard software metrics. International Conference on Aerospace Science & Engineering (ICASE), 2013, pp. 1-5
  62. Hutcheson, M. L. (2003). Software Testing Fundamentals: Methods and Metrics. John Wiley & Sons, Inc., New York, NY, USA
  63. IEEE Standard for Software Test Documentation. IEEE Std 829-1983, New York, February 1983
  64. Fu, J.; Lu, M.: Request-Oriented Method of Software Testability Measurement. International Conference on Information Technology and Computer Science (ITCS 2009), 2009, pp. 77-80
  65. Jungmayr, S.: Testability Measurement and Software Dependencies. Proc. of the 12th International Workshop on Software Measurement, October 7-9, 2002, Magdeburg, Shaker Publ., Aachen, pp. 179-202
  66. Kan, S. H., Parrish, J., and Manlove, D. (2001). In-process metrics for software testing. IBM Systems Journal, 40(1):220–241
  67. Kapfhammer, G. M.; Soffa, M. L.: A Family of Test Adequacy Criteria for Database-Driven Applications. Software Engineering Notes, 28(2003)5, pp. 98-107
  68. Katkov, V.L.; Shimarov, V.A.: A Quantitative Approach to Program Testing Quality Control. Proceedings of the Second International Conference on Software Quality, Research Triangle Park, NC, October 3-5, 1992, pp. 215-222
  69. Katkov, V.L.; Shimarov, V.A.: Structural Software Testing Criteria Evaluation. Proceedings of the International Conference on CAD/CAM, Robotics and Practice of the Future, St. Petersburg, 1993, pp. 558-565
  70. Keese, P.; Meyerhoff, D.: Experiences from Business Integration Testing. Proc. of the CONQUEST 2001, Nuremberg, Germany, September 2001, pp. 51-58
  71. Khoshgoftaar, T.M.; Allen, E.B.; Gramont, A.: Identification of Change-Prone Telecommunications Software Modules During Testing and Maintenance. Proc. of the AOWSM, June 5-7, 1995, Oregon, Section 9
  72. Khoshgoftaar, T.M.; Szabo, R.M.: ARIMA Models of Software System Quality. Proceedings of the Annual Oregon Workshop on Software Metrics, April 10-12, 1994, Silver Falls, Oregon
  73. Lasalle, H.: A Structural Approach to Improving the Testing Process. in Kelly, M.: Management and Measurement of Software Quality, UNICOM SEMINARS, Middlesex, UK, 1993, pp. 121-130
  74. Leszak, M.; Brunck, W.; Mößler, G.: Analysis of Software Defects in a Large Evolutionary Telecommunication System. Proc. of the 12th International Workshop on Software Measurement, October 7-9, 2002, Magdeburg, Shaker Publ., Aachen, pp. 268-290
  75. Lewis, W. E.: Software Testing and Continuous Quality Improvement. Auerbach Publ./ CRC Press, 2000
  76. Liang, D.; Harrold, M. J.: Equivalence Analysis and its Application in Improving the Efficiency of Program Slicing. ACM Transactions on Software Engineering and Methodology, 11(2002)3, pp. 347-383
  77. Liggesmeyer, P.: A set of complexity metrics for guiding the software test process. Software Quality Journal, 4(1995), pp. 257-273
  78. Liggesmeyer, P.: Software-Qualität. Testen, Analysieren und Verifizieren von Software. Spektrum Akademischer Verlag, Berlin, Germany, 2002
  79. Malevris, N.; Yates, D.F.; Veevers, A.: Predictive metric for likely feasibility of program paths. Information and Software Technology, 32(1990)2, pp. 115-118
  80. May, J.H.R.; Lunn, A.D.: A Model of Code Sharing for Estimating Software Failure on Demand Probabilities. IEEE Transactions on Software Engineering, 21(1995)9, pp. 747-753
  81. McAllister, M.; Vuong, S.T.; Alilovic-Curgus, J.: Automated Test Case Selection Based on Test Coverage Metrics. Proceedings of the IWPTS'92, Montreal, Sept/Oct 1992, pp. 63-76
  82. McColl, R.B.; McKim, J.C.: Evaluating and Extending NPath as a Software Complexity Measure. The Journal of Systems and Software, 7(1992) pp. 275-279
  83. Meng, L.; Lu, M.; Huang, B.; Xu, X.: Using Relative Complexity Measurement Which from Complex Network Method to Allocate Resources in Complex Software System's Gray-Box Testing. International Symposium on Computer Science and Society (ISCCS), 2011, pp. 189-192
  84. Mills, K.L.: An Experimental Evaluation of Specification Techniques for Improving Functional Testing. The Journal of Systems and Software, 32(1996)1, pp. 83-95
  85. Morgan, J.A.; Knafl, G.J.; Wong, W.E.: Predicting Fault Detection Effectiveness. Proc. of the Fourth METRICS'97, Albuquerque, Nov. 5-7, 1997, pp. 82-89
  86. Muellerburg, M.: Fundamental Concepts of Software Testing. GMD Research Report, Birlinghoven, March 1991
  87. Muellerburg, M.: Software Testing: A Stepwise Process. Proceedings of the Second European Conference on Software Quality Assurance, Oslo, 1990
  88. Munson, J.C.; Hall, G.A.: Software Measurement Based Statistical Testing. Proc. of the AOWSM, June 5-7, 1995, Oregon, Section 8
  89. Nejmeh, B.A.: NPATH: A Measure of Execution Path Complexity and Its Applications. Comm. of the ACM, 31(1988)2, pp. 188-200
  90. Obara, E. et al.: Metrics and Analysis in the Test Phase of Large-Scale Software. The Journal of Systems and Software, 38(1997)1, pp. 37-46
  91. Neumann, R.: Service-oriented testing using virtualized load drivers from the cloud (German). In: Schmietendorf, A.: BSOA/BCloud 2013, Shaker-Verlag, Aachen, 2013, pp. 65-66
  92. Neumann, R.; Dumke, R.; Schmietendorf, A.: Enterprise Mashups - Usefulness and Relevance Put to the Test. In: H.R. Arabnia; H. Reza; L. Deligiannidis (Associate Editors: J.J. Cuadrado-Gallego; V. Schmidt; A.M.G. Solo): Proceedings of the 2010 International Conference on Software Engineering Research & Practice, WORLDCOMP 2010 (SERP 2010), Volume I, July 12-15, 2010, Las Vegas Nevada, USA, CSREA Press, pp. 226-232
  93. Offutt, A.J.; Pan, J.; Tewary, K.; Zhang, T.: An Experimental Evaluation of Data Flow and Mutation Testing. Software - Practice and Experience, 26(1996)2, pp. 165-176
  94. Olsson, T.; Bauer, N.; Runeson, P.; Bratthall, L.: An Experiment on Lead-Time Impact in Testing of Distributed Real-Time Systems. Proc. of the Seventh International Software Metrics Symposium METRICS 2001, April 2001, London, pp. 159-168
  95. Orso, A.; Apiwattanapong, T.; Harrold, M. J.: Leveraging Field Data for Impact Analysis and Regression Testing. Software Engineering Notes, 28(2003)5, pp. 128-137
  96. Paton, K.: Should you test the code before you test the program? Proc. of the ESCOM 2001, April 2001, London, pp. 377-386
  97. Pilz, S.; Foltin, E.: Foundations to the development of test metrics for local area networks (German). Study, TU Magdeburg, July 1992
  98. Piwowarski, P.; Ohba, M.; Caruso, J.: Coverage Measurement Experience During Function Test. Proceedings of the 15th International Conference on Software Engineering, May 17-21, Baltimore, 1993, pp. 287-301
  99. Pol, M.: Measuring test process improvement. Proc. of the FESMA'99, Amsterdam, Netherlands, October 1999, pp. 23-27
  100. Reid, S.C.: An Empirical Analysis of Equivalence Partitioning, Boundary Value Analysis and Random Testing. Proc. of the Fourth METRICS'97, Albuquerque, Nov. 5-7, 1997, pp. 64-73
  101. Royer, T.C.: Software Testing Management - Life on the Critical Path. Prentice-Hall Inc., 1993
  102. Schaefer, H.: Inspection Handbook for Computer Program Development Projects. Report No. 84 01 41-9, Oslo, July 1984
  103. Schaefer, H.: Organizing and managing software testing (German). Tutorial booklet, Schweizerische Arbeitsgemeinschaft fuer Qualitaetsfoerderung, Zuerich, Switzerland, September 1991
  104. Seddio, C.: Integrating Test Metrics within a Software Engineering Measurement Program at Eastman Kodak Company: A Follow-up Case Study. The Journal of Systems and Software, 20(1993)3, pp. 227-235
  105. Shimarov, V.A.: Definition and quantitative evaluation of test criteria. Proc. of the Fourth European Conference on Software Quality, October 17-20, Basel, Switzerland, pp. 350-360
  106. Shyamala, M.: Empirically Evaluating Software Testing Processes Using the Test Process Evaluation Framework. Master Thesis, University of Magdeburg, Germany, 2009
  107. Singh, Y.; Saha, A.: Prediction of testability using the design metrics for object-oriented software. International Journal of Computer Applications in Technology, 44(2012)1, pp. 12-22
  108. Sneed, H. M.: Test metrics. Metrics News, Journal of GI-Interest Group on Software Metrics, 12(2007)1, pp. 41-51
  109. Sneed, H.; Jungmayr, S.: Product and Process Metrics for the Software Test. Informatik-Spektrum, 29(2006)1, February 2006, p. 23
  110. Spillner, A.: Test Coverage Metrics for Integration Testing. (German) in: Dumke/Zuse: Theorie und Praxis der Softwaremessung, Deutscher Universitaetsverlag, Wiesbaden, 1994, pp. 59-77
  111. Spillner, A.: Test criteria and coverage measure for software integrated testing. Software Quality Journal, 4(1995), pp. 275-286
  112. Staknis, M.E.: Software quality assurance through prototyping and automated testing. Information and Software Technology, 32(1990)1, pp. 26-33
  113. Sy, N. T.; Deville, Y.: Consistency Techniques for Interprocedural Test Data Generation. Software Engineering Notes, 28(2003)5, pp. 108-127
  114. Tackett, B.D.; Doren, B.V.: Process Control for Error-free Software: A Software Success Story. IEEE Software, May/June 1999, pp. 24-29
  115. Tai, K.: Predicate-Based Test Generation for Computer Programs. Proceedings of the 15th International Conference on Software Engineering, May 17-21, Baltimore, 1993, pp. 267-276
  116. Traon, Y. L.; Robach, C.: Testability Measurements for Data Flow Designs. Proc. of the Fourth METRICS'97, Albuquerque, Nov. 5-7, 1997, pp. 91-98
  117. Verma, S., Ramineni, K., and Harris, I. G.: An efficient control oriented coverage metric. In ASP-DAC ’05: Proceedings of the 2005 conference on Asia South Pacific design automation, pages 317–322, New York, NY, USA. ACM Press
  118. Voas, J.M.; Miller, K.W.: Semantic Metrics for Software Testability. The Journal of Systems and Software, 20(1993)3, pp. 207-216
  119. Weyuker, E.J.: Can We Measure Software Testing Effectiveness? Proceedings of the First International Software Metrics Symposium, Baltimore, May 21-22, 1993, pp. 100-107
  120. Woodward, M.R.; Hedley, M.A.; Hennell, M.A.: Experience with Path Analysis and Testing of Programs. IEEE Transactions on Software Engineering, 6(1980)3, pp. 278-286
  121. Xu, P.; Wang, Y.; Shen, Z.: Application of software testability measurement model SPM to software testing. 8th International Conference on Reliability, Maintainability and Safety (ICRMS 2009), 2009, pp. 733-737
  122. Xu, L.; Xu, B.; Nie, C.; Chen, H.; Yang, H.: A Browser Compatibility Testing Method Based on Combinatorial Testing. Proc. of the International Conference on Web Engineering (ICWE 2003), Oviedo, Spain, July 2003, pp. 310-313
  123. Yeh, P.; Lin, J.: Software Testability Measurements Derived from Data Flow Analysis. Proc. of the CSMR'98, March 8-11, 1998, Florence, Italy, pp. 96-102
  124. Zhang, Y.: IEEE Software, September/October 2004, pp. 80-86
  125. Zhu, H.; Hall, P.A.V.: Test data adequacy measurement. Software Engineering Journal, January 1993, pp. 21-29