Discussion Questions



1. Imagine yourself in the role of the Director of the Social and Manpower Planning Division. Create a table or list in which you assess the strengths and weaknesses of each design. This table can be written in technical language. Be sure to consider:



a. The scientific quality of the design, i.e., its ability to estimate the true impact of PATH on the key outcomes of interest.



b. The political feasibility of implementing the design.



c. The logistical implications of the design, in terms of ensuring that findings from the evaluation are available in a timely manner for policymakers.



d. The financial implications of the design, in particular if it involves more resources than those already budgeted.



2. Write a one-page single-spaced memo to the Minister of Labour and Social Security recommending which design should be selected to evaluate PATH. Justify your recommendation using the strengths and weaknesses you identified above, but write the memo in non-technical language. Attach your table of strengths and weaknesses to the memo.



*I have attached the lecture slides that accompany the assignment. Please try to match their language and the quasi-experiment methods they reference.



5008 EVIDENCE-BASED DECISION MAKING
Session 12: Evidence from Experiment III: Quasi-Experiment
Instructor: Shuyang Peng, PhD

QUASI-EXPERIMENT
- Quasi-experiment: studies of planned or intentional treatments that resemble randomized field experiments but lack full random assignment.
- The term "comparison group" is often used in the context of quasi-experiments rather than "control group," the term used in randomized experiments, to highlight the lack of random assignment. However, researchers do not always obey this distinction, so still look closely at how the assignment was done.
- Whether a specific quasi-experiment will yield unbiased estimates of program effects depends largely on the extent to which the design minimizes critical differences between the intervention and comparison groups.
- Different quasi-experimental techniques:
  - Reflexive controls
  - Constructing comparison groups by matching
  - Regression-discontinuity designs
  - Difference in difference

REFLEXIVE CONTROL DESIGN
- Two types of reflexive control design: simple pre-post studies and time-series designs.
- Simple pre-post study: the primary outcome is observed once before the intervention (Time A) and once after it (Time A').
- Time-series design: a series of observations on the outcome over time. The strongest reflexive control design. If the treatment has a causal impact, the post-intervention series will have a different level or slope than the pre-intervention series.
- Example (change in level): the effects of charging for directory assistance in Cincinnati. An interrupted time-series analysis of local directory-assistance calls in the Cincinnati area from 1962 to 1976 revealed a significant reduction in the daily frequency of calls after charges were introduced in 1974 (McSweeny, 1978).
- Example (change in slope): Canada sexual assault law reform. In 1983, Canada reformed its laws regarding rape and sexual assault to facilitate a greater reporting of sexual offenses and a more just approach to prosecution and victim rights (Schissel, 1999).
- Example (delayed effect): the effects of an alcohol warning label on prenatal drinking (Hankin et al., 1993).
- In a reflexive control design, the estimation of program effects comes entirely from information on the targets at two or more points in time, at least one of which is before exposure to the program.
- In a reflexive comparison, the counterfactual is constructed on the basis of the situation of program participants before the program. Thus, program participants are compared to themselves before and after the intervention and function as both treatment and comparison group.
  - This type of design is particularly useful in evaluations of full-coverage interventions such as nationwide policies and programs in which the entire population participates and there is no scope for a control group.
- A presumption must be made: the targets have not changed on the outcome variable during the time between observations except for any change induced by the intervention.
  - A major drawback of such a design: the situation of program participants before and after the intervention may change owing to myriad reasons independent of the program.
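To make the time-series (reflexive control) logic concrete, here is a minimal sketch of an interrupted time-series (segmented regression) analysis in Python. The data, variable names, and intervention timing are hypothetical placeholders rather than anything from the slides or the PATH case; the point is only to show how a level shift and a slope change after an intervention can be estimated.

# Sketch of a reflexive control / interrupted time-series analysis.
# All data below are simulated; 'post' and 't_since' are hypothetical names.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 48                                              # 24 pre- and 24 post-intervention months
df = pd.DataFrame({"t": np.arange(n)})
df["post"] = (df["t"] >= 24).astype(int)            # 1 after the intervention
df["t_since"] = np.where(df["post"] == 1, df["t"] - 24, 0)
df["y"] = 50 + 0.2 * df["t"] - 5 * df["post"] - 0.3 * df["t_since"] + rng.normal(0, 1, n)

# Segmented regression: 'post' captures an immediate level shift at the
# intervention, 't_since' captures a change in slope afterwards.
model = smf.ols("y ~ t + post + t_since", data=df).fit()
print(model.params[["post", "t_since"]])

The coefficient on post estimates the immediate drop (or jump) in the outcome at the intervention, and the coefficient on t_since estimates how the trend changes afterwards; as the slides note, the key assumption is that nothing else changed the outcome at the same time.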
CONSTRUCTING CONTROL GROUPS BY MATCHING
- Matching: finding individuals or groups who are as close as possible to those in the treatment group, so they can be used to estimate the counterfactual.
- Matching can be done either at an individual or an aggregated level.
- Example of matching at the individual level: a stress reduction program at a university at the approach of final exam week. The program involves a 2-hour workshop run by a counselor, along with materials and exercises the students use later on their own.
- Example of matching at the group level: in the 1990s, the U.S. Department of Housing and Urban Development (HUD) implemented a grant program to encourage resident management of low-income public housing projects (see Van Ryzin, 1996).
  - Inspired by earlier spontaneous efforts by residents who organized to improve life in troubled public housing projects, HUD implemented a program of grants and technical assistance to selected housing projects in 11 cities nationwide to establish resident management corporations (RMCs).
  - These nonprofit RMCs, controlled and staffed by residents, managed the housing projects and initiated activities aimed at long-standing community issues such as crime, vandalism, and unemployment. The RMCs were intended to improve building maintenance, security, and housing satisfaction.
  - "Selected" is the critical word: the HUD-funded projects were not just any housing projects but ones that thought themselves, or were judged by HUD, to be good candidates for the program.
  - In the HUD study, comparison buildings were matched to the treatment buildings in terms of location (city), architecture, and general demographic characteristics.

REGRESSION DISCONTINUITY DESIGN
- Example: implementing a developmental reading program to improve the GPA of community college students (Napoli and Hiltner, 1993). The accompanying slides plot GPA against scores on a reading comprehension pretest, with a cutoff score separating the treatment group from the comparison group and a separate regression line fitted on each side; the discontinuity in the regression lines at the cutoff is the estimated program effect.
- The effect of the program on an outcome can be determined by comparing the two groups on either side of the cut point.
- The RD design is a pretest-posttest program-comparison group strategy.
- The unique characteristic of RD designs: participants are assigned to the program or comparison group solely on the basis of a cutoff score on a pre-program measure.
- This cutoff criterion implies the major advantage of RD designs:
  - They are appropriate when we wish to target a program or treatment to those who most need or deserve it.
  - They do not require us to assign potentially needy individuals to a no-program comparison group in order to evaluate the effectiveness of the program, which makes them more ethical than a randomized experiment in such settings.
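As a companion to the reading-program example above, here is a minimal sketch of how a sharp regression-discontinuity estimate is commonly computed: regress the outcome on a treatment indicator defined by the cutoff, the centered assignment score, and their interaction, so that each side of the cutoff gets its own regression line. The cutoff value, effect size, and variable names below are simulated, illustrative assumptions and are not taken from the Napoli and Hiltner study.

# Sketch of a sharp regression-discontinuity (RD) estimate on simulated data.
# Students scoring below a hypothetical cutoff on a reading pretest receive the program.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 500
cutoff = 70                                         # hypothetical cutoff score
score = rng.uniform(40, 100, n)                     # pretest (assignment variable)
treated = (score < cutoff).astype(int)              # program serves low scorers
gpa = 1.6 + 0.02 * (score - cutoff) + 0.3 * treated + rng.normal(0, 0.2, n)

df = pd.DataFrame({"gpa": gpa, "score_c": score - cutoff, "treated": treated})

# Separate regression lines on each side of the cutoff; the coefficient on
# 'treated' is the estimated discontinuity (program effect) at the cutoff.
rd_model = smf.ols("gpa ~ treated + score_c + treated:score_c", data=df).fit()
print(rd_model.params["treated"])

In practice, an evaluator would typically also restrict the sample to a bandwidth of scores near the cutoff and check that assignment to the program really follows the cutoff and nothing else.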
DIFFERENCE IN DIFFERENCE
- Example: the three-strikes law in California. California's three-strikes law is a sentencing scheme that gives defendants a prison sentence of 25 years to life if they are convicted of three or more violent or serious felonies.
  - Does the three-strikes law reduce felony arrest rates?
  - What about the influence of the economy and other factors that could affect the crime rate?
  - One solution: find another state, or other states, for comparison.
- The simplest setup: the primary outcome is observed for a treatment group and a comparison group at Time A (before the intervention) and Time B (after it), giving four values: y11 and y12 for the treatment group, y21 and y22 for the comparison group.
  Average treatment effect = (y12 - y11) - (y22 - y21)
- The data vary by group (e.g., state), time (e.g., year), and observed outcome y_gt (group g, time t).
  - There are only two periods, t = 1, 2.
  - The policy treatment/intervention occurs in one group of observations and not in the other.
  - The average treatment effect (ATE) is the difference in differences:

                         Pre (time = 1)   Post (time = 2)   Change
    Group 1 (Treated)    y11              y12               ΔyT = y12 - y11
    Group 2 (Control)    y21              y22               ΔyC = y22 - y21

    DID = ΔyT - ΔyC

    (A minimal worked computation of this formula appears after these slide notes.)

INTERNAL AND EXTERNAL VALIDITY
- Internal validity
  - Concerns causality.
  - Random assignment helps to ensure statistically equivalent, or comparable, groups.
  - Without random assignment, consider the extent to which the comparison group is comparable to the treatment group.
- External validity
  - Generalizability: the extent to which the findings of a study can be projected, or generalized, to other people, situations, or time periods.

THREATS TO INTERNAL VALIDITY
- History: did some unanticipated event occur while the experiment was in progress, and did it affect the dependent variable?
  - The economy, the weather, social trends, political crises: all sorts of events can happen around the time of the treatment, and some of them could also influence the outcome.
  - History is a threat for the one-group design.
  - It is not a threat for the two-group design (treatment/experimental and comparison/control) if both groups are exposed to the effect of history.
- Maturation: were changes in the dependent variable due to normal developmental processes operating within the subjects as a function of time?
  - A threat for the one-group design.
  - Not a threat for the two-group design if participants in both groups change ("mature") at the same rate.
- Selection: refers to how participants are selected into the various groups in the study. Are the groups equivalent at the beginning of the study? How groups are selected into the experimental and comparison groups could bias the estimation of the program effect.
- Testing: did the pre-test affect the scores on the post-test?
  - A pre-test may sensitize participants in unanticipated ways, and their performance on the post-test may be due to the pre-test rather than the treatment, or, more likely, to an interaction of the pre-test and the treatment.
  - A threat for the one-group design.
  - Not a threat for the two-group design if both groups are exposed to the pre-test.
- Other threats: contamination; demoralization and rivalry; noncompliance; attrition.

EXTERNAL VALIDITY
- External validity refers to the degree to which the results of an empirical investigation can be generalized to and across individuals, settings, and times.
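To tie the difference-in-differences formula to numbers, here is a minimal worked computation of the 2x2 case from the table above. The four cell means are made-up placeholders (they are not taken from the California example or the PATH case); the arithmetic is exactly ATE = (y12 - y11) - (y22 - y21).

# Minimal 2x2 difference-in-differences computation; the four cell means
# below are hypothetical placeholders, not real data.
y11, y12 = 420.0, 380.0    # treated group: outcome at time 1 (pre) and time 2 (post)
y21, y22 = 400.0, 395.0    # comparison group: outcome at time 1 and time 2

delta_treated = y12 - y11              # change in the treated group (-40.0)
delta_control = y22 - y21              # change in the comparison group (-5.0)
did = delta_treated - delta_control
print(did)                             # -35.0: the treated group fell 35 points more than the comparison group

With individual-level panel data, the same estimate is usually obtained by regressing the outcome on a treated-group indicator, a post-period indicator, and their interaction; the coefficient on the interaction is the difference-in-differences estimate.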
Kennedy School of Government Case Program
CR14-09-1903.0

This case was written by Dan Levy, Harvard Kennedy School (HKS) Lecturer in Public Policy, and HKS students Michael McCreless and Daniel Bjorkegren. It is based on an evaluation conducted by Mathematica Policy Research (Dan Levy and Jim Ohls) for Jamaica's Ministry of Labour and Social Security that was funded by The World Bank. The case was funded by Harvard Kennedy School's Strengthen Learning and Teaching Excellence (SLATE) initiative. It does not necessarily reflect the views of any of these institutions. (0409)

Copyright © 2009 by the President and Fellows of Harvard College. No part of this publication may be reproduced, revised, translated, stored in a retrieval system, used in a spreadsheet, or transmitted in any form or by any means (electronic, mechanical, photocopying, recording, or otherwise) without the written permission of the Case Program. For orders and copyright permission information, please visit our website at www.ksgcase.harvard.edu or send a written request to Case Program, John F. Kennedy School of Government, Harvard University, 79 John F. Kennedy Street, Cambridge, MA 02138.

Designing Impact Evaluations: Assessing Jamaica's PATH Program

Introduction

It is March 2003, and the Government of Jamaica's Ministry of Labour and Social Security is in the midst of a major reform of its social safety net. The World Bank helped finance this reform and has required the government to evaluate the cornerstone of the social safety net reform, PATH (Programme Advancement through Health and Education). The government has selected a firm to evaluate the program, and is now in discussions with this firm about how best to evaluate the impact of the program. The firm has presented three possible evaluation designs, and the Minister of Labour and Social Security has assigned you, as the Director of the Social and Manpower Planning Division, to select a design.

Background on PATH

In early 2000, the Government of Jamaica undertook a reform of its social safety net system, refocusing the system around PATH, a conditional cash transfer program. Through the conditional cash transfer program, eligible families received cash assistance conditional on regular attendance at school and regular checkups at health centers. This meant that once a family started receiving cash transfers, they would continue to receive them as long as they would continue to meet the program's conditions. [1]

[1] The conditions for receiving benefits are as follows: children 0-6 years old need to visit a health clinic every two months during the first year and twice a year thereafter; children 7-17 years old need to attend school at least 85 percent of school days.

The primary objective of the program was to link social assistance with human capital accumulation. Another aim was to improve the targeting of welfare benefits over previous social assistance programs in Jamaica. With nearly 20 percent of the Jamaican population under


Dr Shweta answered on Apr 12 2023
Ans 1. Table assessing the strengths and weaknesses of the three design types.

Scientific quality of the design
- Design 1
  Strength: The outcomes of the study will be accurate, as the participants involved understand the key objectives very well.
  Weakness: Because the participants are of a very specific type, generalization of the findings is limited.
- Design 2
  Strength: The selection of participants is very good in this design. The findings will be more generalizable, as it involves eligible participants and compares them with a suitable comparator to assess the key outcomes.
- Design 3
  Weakness: Because this design involves participants selected at random irrespective of their eligibility, the data collected on the desired outcomes are less accurate, and so its usefulness is limited.

Political feasibility of implementing the design
- Design 1
  Strength: Because the outcomes are good, the chances of political feasibility are high.
  Weakness: The specificity of the data limits its general implementation.
- Design 2
  Strength: Since the outcomes are more...