Ensuring the Validity of the Data Collection Process

 

SHOULD A DESIGN EVER BE MODIFIED IN ORDER TO WIN APPROVAL FROM INTERESTED PARTIES, ESPECIALLY THOSE PARTIES WHO CAN WITHDRAW THEIR FINANCE OR TERMINATE THE EVALUATION?

It is ironic that this particular question appears in a discussion about the validity of data collection. There are actually several distinct questions embedded within this poorly written one.

1. Should a design EVER be modified? Evaluation designs may need to be modified for any of a variety of reasons, including utilization, methodology, access to data, or simply the type of evaluation being conducted. The big hint here is the word “ever.” It is an all-or-nothing word, and taking the position that a design would never be modified is inherently absurd.

2. Should a design EVER be modified to WIN APPROVAL from interested parties? There is that word “ever” again. In addition, the phrase “win approval” carries the negative connotation of a subordinate seeking acceptance from a superior. More appropriate phrasing would be “improve acceptance” or “improve stakeholder buy-in.” Similarly, “interested parties” refers to stakeholders. Stakeholders have a vested interest in the evaluation, whether formative or summative, and they also hold valuable information about the program that can aid the evaluation.

Once the question is rephrased as “Should an evaluation design be modified to improve stakeholder buy-in and/or participation?” it is much easier to determine the most appropriate answer. An evaluation design may need modification to improve stakeholders’ buy-in, depending on the type of evaluation being conducted: formative or summative. In a formative evaluation, the data collected are intended to be used to improve the program, so consulting and collaborating with the stakeholders is a necessary step to ensure the data gathered are actually useful to them toward that end. In addition, the design may need to be modified for utilization by the interested parties (i.e., stakeholders) depending on the evaluation approach being used: program-oriented, decision-oriented, participant-oriented, expertise-oriented, or consumer-oriented (Fitzpatrick, Sanders, & Worthen, 2011). Further, “evaluators make use of many different data sources and methods. The selection of sources and methods is dependent on the nature of the evaluation question(s), the context of the program to be evaluated, and the nature of credible evidence for stakeholders and clients” (Fitzpatrick et al., 2011, p. 449).

3. Should a design EVER be modified to win the approval of those who can withdraw financing for the evaluation? The all-or-nothing term “ever” seals the deal. I am certain there is bound to be some modification to some evaluation plan, suggested by some financier at some point in time, that would be considered reasonable and justified. That said, evaluations must adhere to the standards of the profession. Our text cautions that it is both unprofessional and pointless to modify the evaluative criteria beyond recognition solely to make them fit data already collected in the hope of saving time and/or money (Fitzpatrick et al., 2011). In addition, the evaluation design is not a static, black-and-white product developed in a vacuum. If there is a financier, then there is likely a contract for the evaluation stating its purpose. If the evaluator proceeds to develop an inappropriate formative design for what was contracted as a summative evaluation, then the evaluator is out of line and needs to redesign the plan.

4. Should a design EVER be modified to win the approval of those who can terminate the evaluation? As in my responses above, there is bound to be some situation in which an evaluation plan needs to be modified based on the suggestions of someone with the power to pull the plug. It is important to remember that there is a HUGE difference between winning the approval of those in charge and creating a collaborative working arrangement. Communication, education, and interpersonal skills are all necessary to facilitate appropriate working relationships, especially in external evaluation circumstances where the evaluator is considered an “outsider.”

5. [Case Study Application (Morris, 1998)] Should the evaluation design be modified to win approval of DSS advocates who are threatening to apply political pressure to have the study terminated? First, if I were the evaluator, I would not modify the evaluation plan simply because some advocates are unhappy. Second, if I felt the design was the only way to obtain the information the evaluation requires, then I would meet with the “advocates” and provide a more thorough understanding of the tasks involved in the evaluation. Third, these “advocates” are clearly stakeholders in some respect; as such, they should have been consulted before the design was finalized. Fourth, if the horse has already left the barn and damage control is the only way to salvage the evaluation, then it would be time to get input from all the stakeholders on the evaluation questions the study is attempting to answer, request their views on how the needed information can be obtained, and investigate alternative methodologies. There is no single best design. In fact, from my perspective, a randomized experimental design is not the best design for this type of evaluation: there are too many variables to implement a true experiment, and the best that could be hoped for would be a quasi-experimental design. Further, given the ethical issues involved in denying possible treatment in human services, the standards for professional conduct of evaluators would not support such a design. In any case, the evaluator would need to modify the design in keeping with the purpose of the evaluation.

Lastly, as this discussion turns to the concept of validity, it might be interesting to ask ourselves: did this discussion question, as originally written, measure our learning of the concepts intended? Does it produce reliable answers? Do cohort members consistently shred the question and get to its bare bones?

“WHAT THREATENS THE VALIDITY OF A DATA COLLECTION PROCESS?”

“Validity is the degree to which any measure or data collection technique succeeds in doing what it purports to do; it refers to the meaning of an evaluative measure or procedure. The validity and/or reliability of measures can be affected by such factors as inconsistent data collection techniques, biases of the observer, the data collection setting, instrumentation, behavior of human subjects, and sampling” (Powell, 2006, p. 115).

Additional threats to validity include methodological problems such as inadequate definition of the evaluation questions and constructs, mono-operation bias, mono-method bias, interaction of different treatments, interaction of testing and treatment, restricted generalizability, and confounding constructs. In addition, there are social and subjective issues that threaten validity, such as hypothesis guessing, evaluation apprehension, and experimenter bias (Trochim, 2006).

The case study could present validity issues depending on the sampling used in the final design. For instance, if the evaluator allowed himself or herself to be persuaded to modify the design toward the DSS Director’s suggestion, categorizing treatment and non-treatment groups based on an assessment of need rather than random assignment, the data collected would be open to significant validity threats such as selection bias.
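To make that selection issue concrete, here is a minimal simulation sketch. It is purely illustrative and not part of the case study: the effect size, the need-to-outcome relationship, and the assignment rules are all assumptions. It simply shows how assigning the neediest clients to treatment confounds need with treatment and can make an effective program look harmful, whereas random assignment recovers the assumed effect.

```python
# Illustrative sketch only: hypothetical numbers, not drawn from the case study.
import random

random.seed(1)

TRUE_EFFECT = 5.0   # assumed true benefit of the program on an outcome score
N = 10_000          # hypothetical number of clients

# Latent "need" for services; greater need lowers the outcome on its own.
need = [random.gauss(0, 1) for _ in range(N)]

def outcome(need_level, treated):
    # Outcome worsens with need, improves with treatment, plus noise.
    return 50 - 8 * need_level + (TRUE_EFFECT if treated else 0) + random.gauss(0, 2)

def estimated_effect(treated_flags):
    # Naive treatment-vs-control difference in mean outcomes.
    t = [outcome(n, True) for n, f in zip(need, treated_flags) if f]
    c = [outcome(n, False) for n, f in zip(need, treated_flags) if not f]
    return sum(t) / len(t) - sum(c) / len(c)

random_assignment = [random.random() < 0.5 for _ in range(N)]  # coin flip
need_based = [n > 0 for n in need]                             # neediest half served

print(f"Assumed true effect:            {TRUE_EFFECT:.1f}")
print(f"Random assignment estimate:     {estimated_effect(random_assignment):.1f}")
print(f"Need-based assignment estimate: {estimated_effect(need_based):.1f}")
```

Under these assumptions, the random-assignment estimate lands near the true effect, while the need-based estimate comes out strongly negative because the treated group was needier to begin with.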

HOW MIGHT YOU OVERCOME THESE POSSIBLE THREATS TO THE OBJECTIVITY OF THE DATA COLLECTION PROCESS?

“The use of multiple measures can help to increase the validity and reliability of the data” (Powell, 2006, p. 115). “Multiple methods can be used to increase validity, to enhance understanding, or to inform subsequent data collection” (Fitzpatrick et al., 2011, p. 449).

Our text also provides a list of suggestions for combating technical problems that can occur during data collection, some of which can also affect the objectivity of the data collected (a brief illustrative sketch follows the list). For instance:

• “Unclear directions lead to inappropriate responses, or the instrument is insensitive or off target. (Always pilot-test your methods).

• Inexperienced data collectors reduce the quality of the information being collected (Always include extensive training and trial runs. Eliminate potential problem staff before they hit the field. Monitor and document data collection procedures).

• Partial or complete loss of information occurs. (Duplicate and save, and backup files and records; keep records and raw data under lock and key at all times.)

• Information is recorded incorrectly. (Always check data collection in progress. Cross-checks of recorded information are frequently necessary).

• Outright fraud occurs. (Always have more than one person supplying data. Compare information, looking for the “hard to believe.”)

• Procedures break down. (Keep logistics simple. Supervise while minimizing control for responsible evaluation staff. Keep copies of irreplaceable instruments, raw data, records, and the like.)” (Fitzpatrick et al., 2011, p. 444).
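To illustrate how a few of these safeguards might be operationalized, here is a minimal sketch of automated cross-checks on collected records. The field names, scale, and plausibility thresholds are hypothetical, not drawn from the text; the point is simply that duplicates, out-of-scale responses, and “hard to believe” values can be flagged while data collection is still in progress.

```python
# Illustrative sketch only: hypothetical fields and thresholds.
import csv
import io

# Stand-in for raw data recorded in the field.
RAW = """respondent_id,collector,satisfaction_1_to_5,visits_last_month
101,A,4,2
102,A,5,1
102,A,5,1
103,B,7,3
104,B,5,45
"""

VALID_SATISFACTION = range(1, 6)   # instrument scale is 1-5
MAX_PLAUSIBLE_VISITS = 31          # more than daily visits is "hard to believe"

rows = list(csv.DictReader(io.StringIO(RAW)))

seen = set()
for row in rows:
    rid = row["respondent_id"]
    if rid in seen:
        print(f"Duplicate record for respondent {rid}: possible double entry.")
    seen.add(rid)

    if int(row["satisfaction_1_to_5"]) not in VALID_SATISFACTION:
        print(f"Respondent {rid}: satisfaction score is outside the instrument scale.")

    if int(row["visits_last_month"]) > MAX_PLAUSIBLE_VISITS:
        print(f"Respondent {rid}: implausible visit count; verify with the collector.")
```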

HOW DO YOU DETERMINE WHAT INFORMATION IS VALID TO INCLUDE IN YOUR PROGRAM EVALUATION?

There are a few different approaches to ensuring the information collected is valid for the program evaluation. Two important steps are: (1) “clearly define what it is you want to measure (e.g., reactions, knowledge level, people involvement, behavior change, etc.)” (Suvedi, n.d., p. 17); and (2) include the stakeholders: “Stakeholders are involved in evaluations for many reasons, but the primary ones are to encourage use and to enhance the validity of the study… Involving stakeholders in describing the program, setting program boundaries, identifying evaluation questions, and making recommendations about data collection, analysis, and interpretation adds to the validity of the evaluation because stakeholders are program experts” (Fitzpatrick et al., 2011, p. 317).
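As a small, concrete way to act on the first step, the sketch below lays out a hypothetical planning structure; every entry (questions, constructs, measures, stakeholders) is invented for illustration, not taken from the case study. The idea is simply that mapping each evaluation question to what will be measured, from which sources, and with which stakeholders consulted makes gaps visible before data collection begins.

```python
# Illustrative sketch only: all entries are hypothetical placeholders.
evaluation_plan = [
    {
        "question": "Did participants' knowledge of available services improve?",
        "construct": "knowledge level",
        "measures": ["pre/post knowledge test", "intake interview"],
        "data_sources": ["participants", "case workers"],
        "stakeholders_consulted": ["DSS staff", "client advocates", "funder"],
    },
    {
        "question": "Did participation change service-seeking behavior?",
        "construct": "behavior change",
        "measures": ["agency visit records", "follow-up survey"],
        "data_sources": ["agency database", "participants"],
        "stakeholders_consulted": ["DSS staff", "client advocates"],
    },
]

# Flag any question whose plan is missing a construct, measure, source, or stakeholder.
for item in evaluation_plan:
    missing = [key for key, value in item.items() if not value]
    status = "OK" if not missing else f"missing {missing}"
    print(f"{item['question']} -> {status}")
```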

In regard to the case study in this discussion, these two steps should be implemented forthwith! Only after such collaborative efforts have been applied can this evaluator hope to complete the evaluation successfully.

References

Fitzpatrick, J. L., Sanders, J. R., & Worthen, B. R. (2011). Program evaluation: Alternative approaches and practical guidelines (4th ed.). Upper Saddle River, NJ: Pearson Education.

Morris, M. (1998). The design. American Journal of Evaluation, 19(3), 383-384. Retrieved from EBSCOhost

Powell, R. R. (2006, Summer). Evaluation research: An overview. Library Trends, 55(1), 102-120. Retrieved from EBSCOhost

Suvedi, M. (n.d.). Introduction to program evaluation. Retrieved from hostedweb.cfaes.ohio-state.edu/brick/suved2.htm

Trochim, W. M. (2006). Threats to construct validity. Retrieved from http://www.socialresearchmethods.net/kb/consthre.php
