Examining Program Evaluation Models


BASED ON WHAT YOU LEARNED ABOUT THE VARIOUS MODELS OF PROGRAM EVALUATION, IS THERE ONE MODEL THAT YOU WOULD ARGUE IS MORE EFFECTIVE, PRACTICAL, OR OTHERWISE PREFERABLE TO THE OTHERS? WHY?

"Yes, no, and maybe." The typical lawyer answer (F. Elliot Goldman, almost every day of his working life).

First, the approaches we learned about this week are the result of professional theories developed by individuals working, researching, and living within their fields. Second, theories are complicated, developed by people with different cognitive, emotional, and professional backgrounds, training, and experiences. Third, what works in one situation may not work in another.

I have an example, but let me apologize in advance if it is not universally applicable (rarely is any example universal). Many people have children. Many have more than one child. Prior to having children, and similar to those who never do, we hold attitudes and expectations about what life will be like once children arrive. Those very expectations and attitudes often change dramatically from before to after. Ironically, there is also a significant shift in attitudes and expectations between the first child and the second. Our lives are turned upside down upon the birth of the first child – becoming a parent. We acclimate and adapt. We believe we NOW understand what having a child is REALLY all about. Imagine our surprise when we have another child. Once again, we are stunned at the radical difference going from one child to two has on our lives. Further, and even more surprising considering our society’s inherent acceptance of each person’s right to individuality (uniqueness), we are bewildered when our second or third children do not respond to the same parenting techniques the first or second child did. This, in spite of the fact that, in all likelihood, the new parents have been warned by other more seasoned parents and/or the grandparents. What? Your brother/sister loved when I rocked them to sleep. What? Your brother/sister never cried in the car. What? Your brother/sister … you can fill in the blank.

The historical and contemporary approaches detailed this week reminded me a great deal of parenting. What works in one situation with one child does not always apply to another situation or another child. It really is a matter of context and individuality (uniqueness). Programs are unique, created within organizations that have their own cultures and are made up of different people with varied personalities, experiences, and perceptions. Although I cannot agree with Flexner’s argument that “common sense was perhaps the most relevant form of expertise” (Fitzpatrick, Sanders & Worthen, 2011, p. 129), it does make complete sense that evaluative approaches would necessarily need to be adapted and/or modified.

DO YOU THINK SOME MODELS COULD BE MORE EFFECTIVE FOR ONE PROGRAM EVALUATION BUT LESS EFFECTIVE FOR ANOTHER?

Absolutely! There were numerous examples in our text this week of approaches that are more appropriate for some situations and/or programs than for others. For instance, I would be completely surprised if Scriven’s consumer-oriented approach did NOT resonate with all of us, especially when applied to specific product evaluation (Fitzpatrick et al., 2011). Who among us has not asked, “How good is this product?” (Fitzpatrick et al., 2011, p. 145) before purchasing a new computer, software, gaming station, car, or other expensive gadget? As a consumer, I admit, I find it oddly comforting to read detailed evaluations consisting of explicit criteria and standards for product comparisons. I pore over comparisons for software, computers, automobiles, cell phones, and even toys. However, this type of evaluation would not be nearly as effective when applied to non-profit programs such as a rape crisis hotline, a domestic violence prevention program, a homeless shelter, or the SPCA, for at least a couple of reasons. One of the benefits of Scriven’s consumer-oriented approach is that it compares large quantities of apples to apples (i.e., HP computers to Dell computers). As consumers, we do not have, or do not want to spend, the time to review the specifications of each company’s product and make the detailed comparisons ourselves; there are simply too many to compare. On the other hand, non-profit programs such as a rape crisis hotline or a domestic violence prevention program are not competing for consumers in the same sense. Typically, these organizations are limited in supply, despite need and/or demand.

In addition, some approaches are clearly intended to work more toward program improvement (formative) than to provide solely an outcome-based analysis (summative). The intended purpose of the evaluation is paramount in planning which evaluative approach to utilize. “P-PE is designed to improve use. Its primary purpose is practical, not political as in T-PE. Cousins and Earl’s approach is not designed to empower participants or to change power distributions. It is, however, designed to encourage organizational learning and change. Although the immediate goal is practical, increasing the usefulness and actual use of the current evaluation, the long-term goal is capacity building … acknowledge that the approach would not be particularly useful for summative evaluations” (Fitzpatrick et al., 2011, p. 207).

IS IT PERHAPS MORE ADVANTAGEOUS TO SYNTHESIZE VARIOUS APPROACHES INTO ONE? IF YOU WERE TO SYNTHESIZE MORE THAN ONE MODEL FOR A SINGLE EVALUATION, WHAT WOULD THEY BE AND HOW WOULD YOU GO ABOUT IT? WHY WOULD YOU CHOOSE THAT PARTICULAR MIXED METHOD?

Again, I would have to defer to the typical lawyer answer: yes, no, it depends. Some approaches would work well together, assuming they have enough in common fundamentally. For instance, you can mix or synthesize parts of approaches to include both summative AND formative elements. This could enhance the program’s success in the long run, rather than providing a simple yes or no regarding future funding.

Another example of complementary mixed approaches would be pairing an expertise-oriented approach, such as the connoisseurship approach, with a more unbiased or objective approach. Working in concert, two interdependent evaluators could collaborate to produce a wholly more comprehensive evaluative product than either could alone. Actually, on reflection, I often read reviews by other laypersons (non-connoisseurs) in addition to reading consumer comparisons. I do not rely heavily on their ratings, but instead on the types of information they provide. For instance, when deciding what type of computer to purchase, I read numerous reviews focusing on the types of problems people experienced. Were they end-user issues? Did people misunderstand the specifications of the system prior to purchase? Was the customer service lacking? The information provided by other users was very helpful. I was able to identify the primary reasons people criticized the computer system and determined that most of the issues had to do with misunderstanding the computer environment (end-user issues). It gave me confidence in my decision.

There are other types of evaluation that would NOT mix well. For instance, you could NOT mix naturalistic evaluation with objectives-based evaluation, essentially because their value systems are completely different. The objectives-based approach, similar to behaviorism, is focused on simplistic, performance-based measures. Naturalistic evaluation, on the other hand, emerges from the constructivist paradigm … “The authenticity of a study … by its fairness (the extent to which it represented different views and value systems associated with the subject of study), and by its ability to raise stakeholders’ awareness of issues, to educate them to the views of other stakeholders, and to help them move to action” (Fitzpatrick et al., 2011, p. 197).

Another prime example of non-mixable approaches would be the preordinate evaluation approach versus the responsive evaluation approach. Stake provides a detailed comparison highlighting major distinctions between the two approaches … “the greater amount of time spent by the preordinate evaluator in preparing or developing instruments and in analyzing data … in contrast, the dominant activity of the responsive evaluator should be observing the program, learning much more about what is actually going on in the program” (Fitzpatrick et al., 2011, p. 195).

There was so much information to reflect on this week that it is difficult to keep my post from becoming overcomplicated and overwhelming. I can understand why we are discussing the different approaches in addition to completing the course project table; it gives us more than one opportunity to synthesize the information.

I am looking forward to reading the other posts this week. I did not even get a chance to discuss the systems-based approaches or the ISD approach in relation to e-portfolio evaluation (Reardon & Hartley, 2007).

References

Fitzpatrick, J. L., Sanders, J. R., & Worthen, B. R. (2011). Program evaluation: Alternative approaches and practical guidelines (4th ed.). Upper Saddle River, NJ: Pearson Education.

Reardon, R., & Hartley, S. (2007). Program evaluation of e-portfolios. New Directions for Student Services, 119, 83-97. Retrieved from EBSCOhost.
