Case Study: Evaluating a Federally Funded Faculty Training Program

By definition, “sine qua non” means “an indispensable condition, element, or factor; something essential” (“sine qua non,” n.d.). However, a far more meaningful definition is its original connotation as a Latin legal term meaning “[a condition] without which it could not be,” or “but for…,” or “without which [there is] nothing” (“sine qua non,” n.d.). Nothing could be truer of the relationship between instructional design and evaluative processes: without a valid evaluation of the instructional design, the training could very well be meaningless.

Within the context of the case study analysis, Jackie Adams (JA) is a novice instructional designer with a new position and a lot of responsibilities (Ertmer & Quinn, 2007). Although it is possible JA does an adequate job designing the faculty instruction with the SMEs from the various departments, it is unlikely, as an obvious lack of understanding of the evaluation process is evidenced by a sorely deficient evaluation plan. [Apparently, JA did not graduate from Walden University.] JA’s evaluation plan was deficient in many ways.

Failure to Identify Key Issues

First, the portion of the grant that discusses the evaluation segment is clear as to several requirements, which JA failed to satisfactorily address. For instance, the evaluation was to provide a “well-defined and agreed-upon standard for evaluating and continuously improving” (Ertmer & Quinn, 2007, p. 78). This phrasing indicates an absolute standard wherein “the primary goal of the instructional design is to have as many learners as possible reach a satisfactory level of achievement” (Morrison, Ross, Kalman, & Kemp, 2011, p. 287). This requirement indicates a need for the inclusion of criterion-referenced testing: the “measurement of how well each learner attains the required level of comprehension and competence specified for each objective” (Morrison et al., 2011, p. 287). The evaluation plan does not satisfactorily meet this goal.
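To make the distinction concrete, the logic of a criterion-referenced standard can be sketched in a few lines of Python. The scores, objective names, and the 80% cutoff below are entirely hypothetical (the grant itself would define the actual standard); the point is simply that mastery is judged against a fixed criterion per objective, not by ranking learners against one another.

```python
# Hypothetical criterion-referenced check: scores are per-objective
# percentages for each learner; mastery is an absolute cutoff
# (assumed here to be 80%), not a comparison to other learners.
MASTERY_CUTOFF = 80  # assumed standard for illustration only

learner_scores = {
    "Learner A": {"obj_1": 92, "obj_2": 85, "obj_3": 78},
    "Learner B": {"obj_1": 88, "obj_2": 81, "obj_3": 90},
    "Learner C": {"obj_1": 70, "obj_2": 95, "obj_3": 83},
}

def mastery_rate(scores, cutoff=MASTERY_CUTOFF):
    """Return (fraction, names) of learners reaching the cutoff on EVERY objective."""
    masters = [
        name for name, objs in scores.items()
        if all(s >= cutoff for s in objs.values())
    ]
    return len(masters) / len(scores), masters

rate, masters = mastery_rate(learner_scores)
print(f"{rate:.0%} of learners reached mastery: {masters}")
```

Under the goal Morrison et al. describe, the design team would then revise instruction until that mastery fraction is as high as possible, rather than grading on a curve.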

Second, the evaluation was meant to be comprehensive, including documentation, implementation, and, significantly, reiterative review and correction for ALL activities “having a bearing on quality of information, services, and activities” (Ertmer & Quinn, 2007, p. 78). Again, the evaluation plan gives no indication of meeting this goal.

Third, the evaluation was to include a quality audit requiring the collection of data evidencing improvement and effectiveness in significant aspects of the program, such as (1) the efficiency of its operations and image; (2) the discipline of the organization’s operations; and (3) meeting the appropriate level of quality assurance according to the standard. Additionally, the process of the quality audit was to be evidenced through internal evaluations AND external evaluations. Again, the evaluation plan did not meet this objective.

Fourth, the standard itself was to be definitively and painstakingly outlined as to its policies, responsibilities, accountability, procedures, instructions, and record keeping. The evaluation plan did not meet this objective.

Fifth, the information provided indicated a requirement to provide instruction to 100 science, math, and engineering technology educators per year. This was not included in the evaluation plan.

Sixth, according to the grant proposal, the mission of the center is to “improve significantly the educational experiences and opportunities of students preparing for careers in manufacturing and distribution by keeping teacher enhancement as a major focus” (Ertmer & Quinn, 2007, p. 78), but at no point in the evaluation plan were any students surveyed, interviewed, or otherwise assessed as to whether or not their educational experiences or opportunities had improved.

Lack of Understanding of Evaluation as a Process

Putting aside the issues JA fails to address from the grant proposal, the evaluation plan itself (by itself) raises several red flags. First, JA seems a little confused regarding evaluative terminology. There is indication of a great deal of confirmative assessment, although she describes it as summative. Content validity is alleged for the pre/posttest because it is supposedly related to the objectives; however, review of the pre/posttest clearly evidences a poorly designed evaluation instrument. The learning objectives are not itemized properly. The learning objectives are almost completely performance based; the pre/posttest, however, assesses factual knowledge. Further, the test is inadequate in that several objectives receive only one question apiece despite having multiple subcategories. Specifically, there is only one question each for objectives 4, 5, 6, and 8, each of which has several subcomponents as well. This is clearly inadequate to assess anything, especially considering the questions are not related to the performance of the tasks mentioned in the objectives.
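The coverage problem described above is easy to audit mechanically. The sketch below uses hypothetical subcomponent counts (the case study does not list them), keeping only the facts given: objectives 4, 5, 6, and 8 each receive a single question despite having multiple subcomponents, so each fails a minimal one-item-per-subcomponent blueprint check.

```python
# Hypothetical item-coverage audit: a test blueprint should map at least
# one question to every subcomponent of every objective. Subcomponent
# counts below are illustrative; the single question per objective
# mirrors the case study's pre/posttest.
subcomponents_per_objective = {4: 3, 5: 2, 6: 4, 8: 2}
questions_per_objective = {4: 1, 5: 1, 6: 1, 8: 1}

def undercovered(subcomponents, questions):
    """Return {objective: (questions_present, questions_needed)} for shortfalls."""
    return {
        obj: (questions.get(obj, 0), needed)
        for obj, needed in subcomponents.items()
        if questions.get(obj, 0) < needed
    }

shortfalls = undercovered(subcomponents_per_objective, questions_per_objective)
for obj, (have, need) in sorted(shortfalls.items()):
    print(f"Objective {obj}: {have} question(s) for {need} subcomponents")
```

Even this crude count flags every one of the four objectives, before we ever ask the harder question of whether the items measure performance rather than factual recall.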

Additional issues include requiring the faculty to indicate who they are, which in itself could lead to reliability issues. Clearly, the assessment is not going to be very accurate if the faculty tend to massage their answers in the affirmative in order to appease the administration, a purpose made evident in the directions themselves. Further, the administration has obtained monies for the program, so if the program is not working, that does not bode well for the faculty or the school.

There are many additional issues; however, for the sake of brevity, let me merely mention the inadequate rating scales, the inappropriate timing of questions, and the lack of expert review, pilot studies, and research studies, among others.

If only life were as black and white as these case studies. I honestly wish I could get paid to sit back and dissect case studies for a living. It is one of my favorite assignments, usually because, as in this case, there are so many flaws and discrepancies that the task becomes very enjoyable.

Lynn Munoz


Ertmer, P. A., & Quinn, J. (2007). The ID casebook: Case studies in instructional design (3rd ed.). Upper Saddle River, NJ: Pearson Prentice Hall.

Morrison, G. R., Ross, S. M., Kalman, H. K., & Kemp, J. E. (2011). Designing effective instruction (6th ed.). Hoboken, NJ: John Wiley & Sons.

Sine qua non. (n.d.). In unabridged.

Sine qua non. (n.d.). In Wikipedia.


2 thoughts on “Case Study: Evaluating a Federally Funded Faculty Training Program”

  1. Hi Lynn,

    Great review, although technically confirmative evaluations are a type of summative evaluation, so the terms can be used interchangeably in some cases.
