Please follow in detail the guidelines for article critiques in the attached file below.
Discussion Preparation: Software Quality

1. Read and write a review of the following paper, posted below under the Papers folder, "A Framework for the Measurement of Software Quality", and answer the following questions in your review:
   a. Why is the assessment of software quality difficult?
   b. How does this framework impact software quality assurance activities?

Please follow the Guidelines for Article Critiques posted below. Your submission must not exceed 2 pages. Please avoid reusing wording from the research paper.

Guidelines for article critiques.pdf
2-A FRAMEWORK FOR THE MEASUREMENT copy.pdf

A FRAMEWORK FOR THE MEASUREMENT OF SOFTWARE QUALITY

Joseph P. Cavano, Rome Air Development Center
James A. McCall, General Electric Company

ABSTRACT

Research in software metrics, incorporated in a framework established for software quality measurement, can potentially provide significant benefits to software quality assurance programs. The research described has been conducted by General Electric Company for the Air Force Systems Command Rome Air Development Center. The problems encountered defining software quality and the approach taken to establish a framework for the measurement of software quality are described in this paper.

INTRODUCTION

We are all aware of the critical problems encountered in the development of software systems: the estimated costs for development and operation are overrun; the deliveries are delayed; and the systems, once delivered, do not perform adequately. Software, as such, continues to be a critical element in most large-scale systems because of its cost and the critical functions it performs. Many of the excessive costs and performance inadequacies can be attributed to the fact that "software systems possess many qualities or attributes that are just as critical to the user as the function they perform" (Ref 1).
For this reason, considerable emphasis in the research community has been directed at the software quality area. The Air Force, as well as the rest of DoD and industry, is constantly striving to improve the quality of its computer-based systems. Producing high quality software is a prerequisite for satisfying the stringent reliability and error-free requirements of command and control software. Increasingly tight budgets necessitate getting the highest quality software products at the best possible cost. A major difficulty in dealing with software, however, is that there are no quantitative measures of the quality of a software product. This affects the military Command-Control-Communications-Intelligence (C3I) environment, where the requirements for software quality far exceed the demands of the commercial world. The basic resources available for accomplishing each military mission are often specified by agencies external to the responsible organization (i.e., funding by Congress and technology by the laboratories). Thus, the organization must optimize its performance within a limited set of resources. For the development of a software system, this optimization revolves around producing software that fulfills the mission requirements. In order to know that this has been done successfully, the software development should be periodically measured in a quantitative fashion to determine whether the final system will be capable of meeting its objectives.

One problem in making this determination is the absence of a widely accepted definition of software quality. This leads to confusion when trying to specify quality goals for software. A limited understanding of the relationships among the factors that comprise software quality is a further drawback to making quality specifications for software.
A second current problem in producing high quality software is that only at delivery and into operations and maintenance is one able to determine how good the software system is. At this time, modifications or enhancements are very expensive. The user is usually forced to accept systems that cannot perform the mission adequately because of funding, contractual, or schedule constraints. Since software testing alone does not produce or ensure good software -- it only gives an indication of the error frequency that can be expected -- and since verification only shows correspondence to functional requirements, a new process is needed to measure and represent the qualities of a software system. This process should indicate which software characteristics relate directly to mission requirements and serve to define a variety of quality factors: maintainability, reliability, flexibility, correctness, testability, portability, reusability, efficiency, usability, integrity, and interoperability.

The process of software quality measurement may become a new function within the domain of quality assurance. The quantification of these measurements can be compared to mission requirements to determine if those requirements are being met. The quality measurement process must be able to be applied during the requirements and design phases of software production; this key aspect further distinguishes it from the testing and verification activities. The quality measurements are predictive in nature and oriented toward the development phases rather than toward the finished system. Early measurement will give an indication of how well the software product will operate in relation to the quality requirements levied on it. In other words, an initial assessment will be made of the quality of the software system.
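The idea of quantifying quality factors from lower-level measurements can be illustrated with a small sketch. The factor name below comes from the paper's list; the individual metrics, their weights, and the weighted-average scoring scheme are hypothetical choices made purely for illustration, not the paper's prescribed method:

```python
# Hypothetical sketch: rating a quality factor from normalized metric scores.
# The factor name ("maintainability") is from the paper's list of quality
# factors; the metric names, scores, and weights are invented for illustration.

def factor_rating(metric_scores, weights):
    """Combine normalized metric scores (0.0 to 1.0) into a weighted
    average rating for one quality factor."""
    total_weight = sum(weights[m] for m in metric_scores)
    return sum(metric_scores[m] * weights[m] for m in metric_scores) / total_weight

# Two made-up metrics assumed to contribute to maintainability.
weights = {"modularity": 0.6, "self_descriptiveness": 0.4}
scores = {"modularity": 0.8, "self_descriptiveness": 0.5}

rating = factor_rating(scores, weights)
print(f"maintainability rating: {rating:.2f}")  # 0.8*0.6 + 0.5*0.4 = 0.68
```

A scheme like this lets a predicted factor rating be compared against a quantified mission requirement (e.g., "maintainability must rate at least 0.7") during the design phase, which is the paper's point about early, predictive measurement.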
By obtaining such an assessment before testing or final delivery, faults or inadequacies can be identified and corrected early enough in the development process to result in large cost savings.

The framework for the measurement of software quality was established to be useful at two different levels of application: management and quality assurance. At the management level, the software quality factors are user-oriented and can be directed toward meeting the objectives of the system. At the quality assurance level, software-oriented metrics attempt to objectively measure specific elements at both the module and the system level and relate these to the software quality objectives. This paper is concerned mostly with the latter function.

QUALITY AS A RELATIVE MEASURE

The determination of "quality" is a key factor in everyday events -- wine-tasting contests, sporting events, beauty contests, etc. In these situations, quality is judged in the most fundamental and direct manner: side-by-side comparison of objects under identical conditions and with predetermined concepts. The wine may be judged according to clarity of color, bouquet, taste, etc. However, this type of judgment is very subjective; to have any value at all, it must be made by an expert. Subjectivity and specialization also apply to determining software quality. To help solve this problem, a more precise definition of software quality is needed, as well as a way to derive quantitative measurements of software for objective analysis. A major question at this point is whether software can be measured at all. A number of studies indicate that the answer to this question is yes (Refs 2, 3), but it is a qualified yes. Since there is no such thing as absolute knowledge, one should not expect to measure software quality exactly, for every measurement must be partially imperfect.
Jacob Bronowski described this paradox of knowledge in this way: "Year by year we devise more precise instruments with which to observe nature with more fineness. And when we look at the observations, we are discomfited to see that they are still fuzzy, and we feel that they are as uncertain as ever. We seem to be running after a goal which lurches away from us to infinity every time we come within sight of it." (Ref 4).

Consequently, any measurement of software must be somewhat imprecise. This promotes areas of uncertainty surrounding the measurement, so a confidence level must be established to allow for tolerance in software measurement. The real goal of software measurement lies in determining what this area of tolerance might be and how it might affect the use of the measurement. For instance, if precise results are unattainable, does one still wish to expend energy and money to make these measurements? The answer to this is not always clear, but for some applications even a slight indication is better than no indication. Or as Reichenbach states: "Every act of planning requires some knowledge of the future and if we have no perfectly certain knowledge, we are willing to use probable knowledge in its place" (Ref 5).

DIFFICULTY IN
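The tolerance band the authors call for can be made concrete with a minimal sketch. This is my illustration, not the paper's method: given repeated metric observations across modules, report the mean score together with a rough confidence interval instead of a single point value. The normal approximation and the 95% level are assumptions made for the example.

```python
import statistics

def measurement_with_tolerance(samples, z=1.96):
    """Return (mean, half_width) for a list of module-level metric scores.

    Uses a normal approximation; z = 1.96 corresponds to roughly 95%
    confidence. Both choices are illustrative, not prescribed by the paper.
    """
    mean = statistics.mean(samples)
    sd = statistics.stdev(samples)
    half_width = z * sd / len(samples) ** 0.5
    return mean, half_width

# Made-up quality metric scores for six modules.
scores = [0.72, 0.65, 0.80, 0.70, 0.68, 0.75]
mean, tol = measurement_with_tolerance(scores)
print(f"quality score: {mean:.2f} +/- {tol:.2f}")
```

Reporting "0.72 +/- 0.04" rather than "0.72" makes the area of uncertainty explicit, so a decision-maker can judge whether even an imprecise indication is worth acting on, in the spirit of the Reichenbach quotation above.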