When the Challenger exploded, Richard Feynman wrote a 12-page appendix attached to the incident report (the appendix is also attached here). Read the attached appendix in its entirety and answer the following questions:

a) In point form (3-4 points), summarize Feynman's concerns about the design and testing of the Space Shuttle main engines.

b) Feynman was generally impressed by the reliability of the Shuttle's avionics. State 4 factors that were responsible for this high reliability.
Appendix F: Personal Observations on the Reliability of the Shuttle

Richard P. Feynman
June 9, 1986

1. Introduction

It appears that there are enormous differences of opinion as to the probability of a failure with loss of vehicle and of human life. The estimates range from roughly 1 in 100 to 1 in 100,000. The higher figures come from the working engineers, and the very low figures from management. What are the causes and consequences of this lack of agreement? Since 1 part in 100,000 would imply that one could put a Shuttle up each day for 300 years expecting to lose only one, we could properly ask "What is the cause of management's fantastic faith in the machinery?"

We have also found that certification criteria used in Flight Readiness Reviews often develop a gradually decreasing strictness. The argument that the same risk was flown before without failure is often accepted as an argument for the safety of accepting it again. Because of this, obvious weaknesses are accepted again and again, sometimes without a sufficiently serious attempt to remedy them, or to delay a flight because of their continued presence.

There are several sources of information. There are published criteria for certification, including a history of modifications in the form of waivers and deviations. In addition, the records of the Flight Readiness Reviews for each flight document the arguments used to accept the risks of the flight. Information was obtained from the direct testimony and the reports of the range safety officer, Louis J. Ullian, with respect to the history of success of solid fuel rockets. There was a further study by him (as chairman of the launch abort safety panel (LASP)) in an attempt to determine the risks involved in possible accidents leading to radioactive contamination from attempting to fly a plutonium power supply (RTG) for future planetary missions.
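As a quick sanity check of the 1-in-100,000 arithmetic in the introduction above (an illustrative sketch, not part of Feynman's text):

```python
# Sanity check: at 1 failure per 100,000 flights, how many losses would
# 300 years of daily Shuttle launches be expected to produce?
failure_rate = 1 / 100_000          # management's claimed failure probability
flights_per_year = 365              # one launch per day
expected_losses_300_years = failure_rate * flights_per_year * 300

print(expected_losses_300_years)    # ~1.1 losses in 300 years of daily flights
```

So the claimed rate really does amount to flying daily for three centuries and losing about one vehicle, which is the scale of Feynman's incredulity.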
(Originally published in June 1986 as Appendix F to the Rogers Commission Report, in response to the explosion of the Space Shuttle Challenger in January of that year.)

The NASA study of the same question is also available. For the history of the Space Shuttle Main Engines, interviews with management and engineers at Marshall, and informal interviews with engineers at Rocketdyne, were made. An independent (Cal Tech) mechanical engineer who consulted for NASA about engines was also interviewed informally. A visit to Johnson was made to gather information on the reliability of the avionics (computers, sensors, and effectors). Finally there is a report, "A Review of Certification Practices, Potentially Applicable to Man-rated Reusable Rocket Engines," prepared at the Jet Propulsion Laboratory by N. Moore, et al., in February 1986, for NASA Headquarters, Office of Space Flight. It deals with the methods used by the FAA and the military to certify their gas turbine and rocket engines. These authors were also interviewed informally.

2. Solid Rockets (SRB)

An estimate of the reliability of solid rockets was made by the range safety officer, by studying the experience of all previous rocket flights. Out of a total of nearly 2,900 flights, 121 failed (1 in 25). This includes, however, what may be called early errors: rockets flown for the first few times, in which design errors are discovered and fixed. A more reasonable figure for the mature rockets might be 1 in 50. With special care in the selection of parts and in inspection, a figure of below 1 in 100 might be achieved, but 1 in 1,000 is probably not attainable with today's technology. (Since there are two rockets on the Shuttle, these rocket failure rates must be doubled to get Shuttle failure rates from Solid Rocket Booster failure.)

NASA officials argue that the figure is much lower.
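The parenthetical doubling above can be checked directly (a sketch, not from the appendix): for a small per-booster failure probability p, the chance that at least one of the two boosters fails is 1 − (1 − p)², which is almost exactly 2p.

```python
# Probability that at least one of the two Solid Rocket Boosters fails,
# given a per-booster failure probability p.
def srb_pair_failure(p: float) -> float:
    return 1 - (1 - p) ** 2  # exact; approximately 2*p when p is small

p = 1 / 50                       # the "mature rocket" estimate from the text
print(srb_pair_failure(p))       # 0.0396, close to the doubled rate 2/50 = 0.04
```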
They point out that these figures are for unmanned rockets, but since the Shuttle is a manned vehicle "the probability of mission success is necessarily very close to 1.0." It is not very clear what this phrase means. Does it mean it is close to 1 or that it ought to be close to 1? They go on to explain, "Historically this extremely high degree of mission success has given rise to a difference in philosophy between manned space flight programs and unmanned programs; i.e., numerical probability usage versus engineering judgment." (These quotations are from "Space Shuttle Data for Planetary Mission RTG Safety Analysis," pages 3-1, 3-2, February 15, 1985, NASA, JSC.)

It is true that if the probability of failure was as low as 1 in 100,000 it would take an inordinate number of tests to determine it (you would get nothing but a string of perfect flights, from which no precise figure could be drawn other than that the probability is likely less than 1 in the number of such flights in the string so far). But if the real probability is not so small, flights would show troubles, near failures, and possible actual failures with a reasonable number of trials, and standard statistical methods could give a reasonable estimate. In fact, previous NASA experience had shown, on occasion, just such difficulties, near accidents, and accidents, all giving warning that the probability of flight failure was not so very small.

The inconsistency of the argument not to determine reliability through historical experience, as the range safety officer did, is that NASA also appeals to history, beginning "Historically this high degree of mission success…" Finally, if we are to replace standard numerical probability usage with engineering judgment, why do we find such an enormous disparity between the management estimate and the judgment of the engineers?
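Feynman's point about a string of perfect flights can be made quantitative (a sketch using the standard binomial bound, sometimes called the "rule of three"; none of this appears in the appendix itself): after n failure-free flights, the best one can say at 95% confidence is that the failure probability is below roughly 3/n.

```python
# 95%-confidence upper bound on the failure probability after n
# failure-free flights: the largest p with (1 - p)**n >= 0.05.
# For large n this is approximately 3/n (the "rule of three").
def failure_upper_bound(n: int, confidence: float = 0.95) -> float:
    return 1 - (1 - confidence) ** (1 / n)

# The 24 successful Shuttle flights before Challenger could not, by
# themselves, rule out a failure rate of about 1 in 8.5 -- let alone
# establish anything like 1 in 100,000.
print(failure_upper_bound(24))   # ~0.117
```

This is exactly why a short string of perfect flights is consistent with both the engineers' 1-in-100 estimate and far worse, while it says nothing in favor of 1 in 100,000.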
It would appear that, for whatever purpose, be it for internal or external consumption, the management of NASA exaggerates the reliability of its product, to the point of fantasy.

The history of the certification and Flight Readiness Reviews will not be repeated here. (See other parts of the Commission reports.) The phenomenon of accepting for flight seals that had shown erosion and blow-by in previous flights is very clear. The Challenger flight is an excellent example. There are several references to flights that had gone before. The acceptance and success of these flights is taken as evidence of safety. But erosion and blow-by are not what the design expected. They are warnings that something is wrong. The equipment is not operating as expected, and therefore there is a danger that it can operate with even wider deviations in this unexpected and not thoroughly understood way. The fact that this danger did not lead to a catastrophe before is no guarantee that it will not the next time, unless it is completely understood. When playing Russian roulette, the fact that the first shot got off safely is little comfort for the next. The origin and consequences of the erosion and blow-by were not understood. They did not occur equally on all flights and all joints; sometimes more, and sometimes less. Why not sometime, when whatever conditions determined it were right, still more, leading to catastrophe?

In spite of these variations from case to case, officials behaved as if they understood it, giving apparently logical arguments to each other, often depending on the "success" of previous flights. For example, in determining if flight 51-L was safe to fly in the face of ring erosion in flight 51-C, it was noted that the erosion depth was only one-third of the radius. It had been noted in an experiment cutting the ring that cutting it as deep as one radius was necessary before the ring failed.
Instead of being very concerned that variations of poorly understood conditions might reasonably create a deeper erosion this time, it was asserted there was "a safety factor of three." This is a strange use of the engineer's term, "safety factor." If a bridge is built to withstand a certain load without the beams permanently deforming, cracking, or breaking, it may be designed for the materials used to actually stand up under three times the load. This "safety factor" is to allow for uncertain excesses of load, or unknown extra loads, or weaknesses in the material that might have unexpected flaws, etc. If now the expected load comes on to the new bridge and a crack appears in a beam, this is a failure of the design. There was no safety factor at all, even though the bridge did not actually collapse because the crack went only one-third of the way through the beam. The O-rings of the Solid Rocket Boosters were not designed to erode. Erosion was a clue that something was wrong. Erosion was not something from which safety can be inferred.

There was no way, without full understanding, that one could have confidence that conditions the next time might not produce erosion three times more severe than the time before. Nevertheless, officials fooled themselves into thinking they had such understanding and confidence, in spite of the peculiar variations from case to case. A mathematical model was made to calculate erosion. This was a model based not on physical understanding but on empirical curve fitting. To be more detailed, it was supposed a stream of hot gas impinged on the O-ring material, and the heat was determined at the point of stagnation (so far, with reasonable physical, thermodynamic laws). But to determine how much rubber eroded, it was assumed this depended only on this heat, by a formula suggested by data on a similar material.
A logarithmic plot suggested a straight line, so it was supposed that the erosion varied as the .58 power of the heat, the .58 being determined by a nearest fit. At any rate, adjusting some other numbers, it was determined that the model agreed with the erosion (to a depth of one-third the radius of the ring). There is nothing much so wrong with this as believing the answer! Uncertainties appear everywhere. How strong the gas stream might be was unpredictable; it depended on holes formed in the putty. Blow-by showed that the ring might fail even though it was not eroded through, or only partially eroded. The empirical formula was known to be uncertain, for it did not go directly through the very data points by which it was determined. There was a cloud of points, some twice above and some twice below the fitted curve, so erosions twice those predicted were reasonable from that cause alone. Similar uncertainties surrounded the other constants in the formula, etc., etc. When using a mathematical model, careful attention must be given to uncertainties in the model.

3. Liquid Fuel Engine (SSME)

During the flight of 51-L the three Space Shuttle Main Engines all worked perfectly, even, at the last moment, beginning to shut down the engines as the fuel supply began to fail.
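Returning to the erosion model of Section 2: the curve-fitting criticism can be illustrated with a small sketch. The data below are synthetic; only the 0.58 exponent and the factor-of-two scatter come from the text. A log-log least-squares fit recovers a power law, but 2x scatter around the curve makes both the fitted exponent and any single prediction correspondingly uncertain.

```python
import math

# Illustrative sketch (synthetic data, not NASA's): fit erosion ~ C * heat**a
# by least squares in log-log space.  The points are placed alternately a
# factor of 2 below and above an exact 0.58 power law -- the kind of cloud
# of points the appendix describes.
heats = [10, 20, 40, 80, 160, 320]
scatter = [0.5, 2.0, 0.5, 2.0, 0.5, 2.0]            # 2x scatter around the curve
erosions = [h ** 0.58 * f for h, f in zip(heats, scatter)]

xs = [math.log(h) for h in heats]
ys = [math.log(e) for e in erosions]
n = len(xs)
mean_x, mean_y = sum(xs) / n, sum(ys) / n
slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) \
        / sum((x - mean_x) ** 2 for x in xs)

# Factor-of-two scatter is enough to pull the fitted exponent from 0.58 to
# about 0.75, and any single prediction inherits at least that 2x
# uncertainty -- exactly Feynman's complaint about believing the answer.
print(round(slope, 2))  # -> 0.75
```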