While the terms surveillance and evaluation are often used interchangeably, each makes a distinctive contribution to a program, and it is important to clarify their different purposes. Surveillance is the continuous monitoring or routine collection of data on various factors over a regular interval of time. Surveillance systems draw on existing resources and infrastructure.
Data gathered by surveillance systems are invaluable for performance measurement and program evaluation, especially of longer-term and population-based outcomes. There are limits, however, to how useful surveillance data can be for evaluators: surveillance systems may have limited flexibility to add questions for a particular program evaluation. In the best of all worlds, surveillance and evaluation are companion processes that can be conducted simultaneously.
Evaluation may supplement surveillance data by providing tailored information to answer specific questions about a program. Data from questions designed for a specific evaluation are more flexible than surveillance data and may allow program areas to be assessed in greater depth. Evaluators can also use qualitative methods, such as interviews and focus groups. Both research and program evaluation make important contributions to the body of knowledge, but fundamental differences in their purposes mean that good program evaluation need not always follow an academic research model.
Research is generally thought of as requiring a controlled environment or control groups. In field settings directed at prevention and control of a public health problem, this is seldom realistic. Of the ten concepts contrasted in the table, the last three are especially worth noting. Unlike pure academic research models, program evaluation acknowledges and incorporates differences in values and perspectives from the start, may address many questions besides attribution, and tends to produce results for varied audiences.
Program staff may be pushed to do evaluation by external mandates from funders, authorizers, or others, or they may be pulled to do evaluation by an internal need to determine how the program is performing and what can be improved. While push or pull can motivate a program to conduct good evaluations, program evaluation efforts are more likely to be sustained when staff see the results as useful information that can help them do their jobs better.
Data gathered during evaluation enable managers and staff to create the best possible programs, to learn from mistakes, to make modifications as needed, to monitor progress toward program goals, and to judge the success of the program in achieving its short-term, intermediate, and long-term outcomes.
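To make the idea of monitoring progress toward short-term, intermediate, and long-term outcomes concrete, here is a minimal sketch in Python. The indicator names, targets, and values are entirely hypothetical, invented for illustration; they do not come from any real program.

```python
# Illustrative sketch: tracking hypothetical program indicators against
# their targets. All names and numbers here are invented for the example.
from dataclasses import dataclass

@dataclass
class Indicator:
    name: str
    horizon: str      # "short-term", "intermediate", or "long-term"
    target: float     # desired value at the end of the period
    baseline: float   # value when the program started
    current: float    # most recent measurement

    def progress(self) -> float:
        """Fraction of the baseline-to-target distance covered so far."""
        span = self.target - self.baseline
        return 0.0 if span == 0 else (self.current - self.baseline) / span

indicators = [
    Indicator("staff trained", "short-term", target=50, baseline=0, current=35),
    Indicator("screening rate (%)", "intermediate", target=80, baseline=40, current=60),
    Indicator("incidence per 100k", "long-term", target=10, baseline=25, current=22),
]

for ind in indicators:
    print(f"{ind.name} ({ind.horizon}): {ind.progress():.0%} of the way to target")
```

Note that the same formula handles indicators that should decrease (such as incidence) as well as those that should increase, because progress is measured relative to the baseline-to-target span.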
Most public health programs aim to change behavior in one or more target groups and to create an environment that reinforces sustained adoption of these changes, with the intention that changes in environments and behaviors will prevent and control diseases and injuries. Through evaluation, you can track these changes and, with careful evaluation designs, assess the effectiveness and impact of a particular program, intervention, or strategy in producing them.
Program evaluation is one of ten essential public health services [8] and a critical organizational practice in public health. The underlying logic of the Evaluation Framework is that good evaluation does not merely gather accurate evidence and draw valid conclusions, but produces results that are used to make a difference.
You determine the market by focusing evaluations on questions that are most salient, relevant, and important. You ensure the best evaluation focus by understanding where the questions fit into the full landscape of your program description, and especially by ensuring that you have identified and engaged stakeholders who care about these questions and want to take action on the results.
The steps in the CDC Framework are informed by a set of standards for evaluation. The 30 standards cluster into four groups:

Utility: Who needs the evaluation results? Will the evaluation provide relevant information in a timely manner for them?

Feasibility: Are the planned evaluation activities realistic given the time, resources, and expertise at hand?
Propriety: Does the evaluation protect the rights of individuals and protect the welfare of those involved? Does it engage those most directly affected by the program and changes in the program, such as participants or the surrounding community?
Accuracy: Will the evaluation produce findings that are valid and reliable, given the needs of those who will use the results? Sometimes the standards broaden your exploration of choices.

Sample participant survey questions for a program evaluation include:

- What year did you attend the program?
- How did you hear about our program?
- On a scale of 1 to 5, how challenging was the program?
- Do you think the duration of the program met your expectations? (Yes / No / Rather not say)
- In your opinion, was the program schedule flexible? (Yes / No / Rather not say)
- Was the objective of the program explained clearly before registration? (Yes / No / Rather not say)
- Were your needs satisfied after completing the program? (Yes / No / Rather not say)
- Please state the three things that benefited you most from the program.
- Please state three things that you felt were unnecessary in the program.
- Did the program provide a good balance of theoretical and practical learning?
- How often were you evaluated on your understanding of the program? (Very often / Sometimes / Rarely / Never)
- On a scale of 1 to 5, how would you rate the evaluation methods?
- Please state your level of agreement with the following: the skill level of other participants was similar to yours; the instructors were knowledgeable about the topics they taught; including interactive sessions in the program was a good choice; the course material was easy to understand; the registration process was smooth.
- Would you be interested in enrolling in another program with us?

Sample course evaluation items, typically rated on an agreement scale, include:

- The instructor grades consistently with the evaluation criteria.
- The course environment felt like a welcoming place to express my ideas.
- I attended class regularly.
- I consistently prepared for class.
- I have put a great deal of effort into advancing my learning in this course.
- In this course, I have been challenged to learn more than I expected.
- This class has increased my interest in this field of study.
- This course gave me confidence to do more advanced work in the subject.
- I believe that what I am being asked to learn in this course is important.
- The readings were appropriate to the goals of the course.
- The written assignments contributed to my knowledge of the course material and understanding of the subject.
- Expectations for student learning were clearly defined.
- Student learning was fairly assessed.
- The grading practices were clearly defined.
- The grading practices were fair.
- This course was challenging.
- This course made me think.
- What grade do you expect to earn in this course?
- This course helped me develop intellectual skills.
- This course helped me develop professional skills.
- This course enhanced my sense of social responsibility.
- I would recommend this instructor to other students.
- Overall, this instructor met my expectations for the quality of a UW-Madison teacher.
- I would recommend this course to other students.
- Overall, this course met my expectations for the quality of a UW-Madison course.
- This course had high educational impact.
- This course was useful in progress toward my degree.
- Do you have any specific recommendations for improving this course?

For process evaluation, some sample questions are: Who is the target population? What services do they need? Are objectives met? If so, how? If not, why not? Are activities conducted with the target population?
Are there other populations the program should be working with? Is the target population adequately reached by and involved in activities? How does the target population interact with the program? What do they think of the services? Are they satisfied? For outcome evaluation, some sample questions are: What are the outputs, outcomes, objectives, and goals of the project? Are the outcomes, objectives, and goals achieved?
Do they have negative effects? For efficiency evaluation, sample questions are: Is the cost of the services or activities reasonable in relation to the benefits? Are there alternative approaches that could achieve the same outcomes at lower cost?
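The cost question can be made concrete with a simple cost-effectiveness comparison. The sketch below is illustrative only: the two approaches and every figure in it are hypothetical, not drawn from any real evaluation.

```python
# Illustrative sketch: comparing two hypothetical approaches on cost per
# outcome achieved. All figures are invented for the example.

def cost_per_outcome(total_cost: float, outcomes_achieved: int) -> float:
    """Cost-effectiveness ratio: dollars spent per unit of outcome."""
    return total_cost / outcomes_achieved

# Hypothetical alternatives for the same short-term outcome
approaches = {
    "clinic-based counseling": cost_per_outcome(120_000, 300),  # $400 each
    "telephone quit line":     cost_per_outcome(45_000, 150),   # $300 each
}

# List approaches from most to least cost-effective
for name, ratio in sorted(approaches.items(), key=lambda kv: kv[1]):
    print(f"{name}: ${ratio:,.0f} per successful outcome")
```

A ratio like this only compares approaches that produce the same outcome; it says nothing about whether either outcome is worth the cost, which is a separate evaluation question.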