Assessment. A test by any other name is still a test, or is it?
The discussion prompt this week calls for a summary or synthesis of thoughts regarding when it may or may not be appropriate to include the learner in the design and/or creation of assessment tools, and to provide at least one example for each plausible context. It certainly sounds simple enough, doesn’t it? I would have to disagree; the reality is that assessment is anything but simple. For instance, feeling inquisitive, I engaged in a little exploration, searching the keyword “assessment” in a few internet locations of interest. The search (1) on http://www.ask.com yielded a daunting 26,700,000 webpages (http://www.ask.com/web?l=dis&o=102165&qsrc=2869&q=assessment) to peruse; (2) within the books department on Amazon.com alone yielded a slightly less daunting 92,057 books (http://www.amazon.com/s/ref=nb_sb_noss?url=search-alias%3Dstripbooks&field-keywords=assessment&x=18&y=15); and (3) in the Walden online Library’s Education Research Complete database, limited to dates from 2000 to 2011, yielded a hefty 49,314 full-text listings (http://web.ebscohost.com.ezp.waldenulibrary.org/ehost/resultsadvanced?sid=2436b0d8-2bbb-4ec5-bb9a-10e2c513c3c6%40sessionmgr4&vid=2&hid=19&bquery=%28assessment%29&bdata=JmRiPWVoaCZjbGkwPUZUJmNsdjA9WSZ0eXBlPTEmc2l0ZT1laG9zdC1saXZlJnNjb3BlPXNpdGU%3d). Clearly, learning about assessment could be a lifelong task!
Inasmuch as the discussion prompt asks for thoughts and not a dissertation, I will attempt to rein in the seemingly endless considerations that come into play when contemplating “assessments.” Mind you, this is difficult, which may or may not have been the discussion’s purpose, as the course materials did not seem to actually “fit” the topic at hand. There was significant indication that there are many purposes for assessment (Ediger, 2000), contexts for assessment (Musial, Thomas, & Nieminen, 2008), and types of assessment (Morrison, Ross, Kalman, & Kemp, 2011); although, unless I missed it, there did not seem to be an obvious link to the learner’s participation, per se, in the DESIGN AND CREATION of assessments, other than a reference to building performance portfolios (Ediger, 2000). Therefore, with due diligence, I went in quest of an answer to the question: When is it, or is it not, appropriate for a learner to participate in assessment design and creation?
My research quickly revealed information regarding both traditional and performance assessments. For instance, the research indicates there are numerous problems with the current traditional assessment/testing models, including standardized testing, criterion-referenced testing, and summative testing in general, specifically with regard to limitations in the types of knowledge assessed, lack of reliability, lack of validity, issues with compound knowledge in conjunction with lack of specificity, and so on (Marzano, 2006). Performance assessment, also known as authentic assessment, is often heralded as superior to the aforementioned models for several reasons: performance tasks require applying newly acquired knowledge in real-life contexts, and performance assessments are more complex, requiring critical thinking and cognitive processing in constructing responses, developing a real-world task, or proposing plausible solutions to real-world issues (Wiggins, 1990). However, performance assessment has come under criticism for lack of reliability and for teacher bias.
One of the most intriguing discoveries was the Framework for Quality Learning. The “National Research Council found that student achievement increases when (1) Teachers determine and work with preexisting student knowledge and misconceptions; (2) Students reflect on their learning; (3) Classrooms are learner centered; (4) Teachers teach for understanding rather than coverage; (5) Teachers use assessment to inform instruction; (6) Teachers consider what is taught, why it is taught, and how mastery looks; and, (7) Schools and classrooms become communities of learners” (“FQL,” 2008, p. 4). Albemarle County public schools utilized this information to construct an entire Framework for Quality Learning incorporating “rigorous and relevant curriculum, balanced assessment, and engaging instruction” (“FQL,” 2008, p. 1). Assessment is presented as a comparison of assessment for and of learning: “Teachers who understand the multiple purposes of assessment recognize the need for a balance of assessments. Report card grades and SOL tests are examples of assessment of learning and allow students, teachers, school administrators, and policymakers to make inferences regarding the extent to which students have learned the intended curriculum. Assessment of learning is also called ‘summative’ assessment. While assessments of learning do provide valuable information regarding a student’s cumulative level of competence, they fail to provide the day-to-day contextual information that informs teaching and learning.
“When teachers assess for learning, they build a continuous stream of information. These assessments are used throughout instruction to describe students’ needs, plan or adjust interventions, provide students with feedback to facilitate learning, and help students monitor their learning. When assessment is used for learning, teachers provide descriptive rather than evaluative feedback to students. Assessment for learning engages students in ongoing self-assessment. Assessment for learning involves interaction between the teacher and the student. Assessment for learning is student-involved formative assessment in which both students and teachers play active roles” (“FQL,” 2008, p. 17).
Further, a balanced assessment system is described as one in which “students are involved in assessing, tracking, and setting goals for their learning. Students are provided with opportunities to reflect on their understanding both verbally and in writing through the use of reflective journals/logs and conversations with peers and teachers. Portfolios are used to aid in student self-assessment through student collection and communication about assessments. Authentic portfolios involve the student in collecting and evaluating ongoing work for the purpose of improving the skills needed to create such work. This process enables the student to become a reflective learner and involves students in metacognition which deepens their ability and desire to learn. Teacher observations are used to inform and supplement all types of assessment. Informal and formal observations of student participation, interaction, and work inform instructional decisions” (“FQL,” 2008, p. 23).
Students are encouraged to participate in their own learning: “sensitivity to student modalities is critical not only in the arena of assessment, but also in designing instructional activities,” including providing options for students to show their learning (“FQL,” 2008, p. 24). Further, Tomlinson and McTighe state, “assessment becomes responsive when the students are given options to adequately show their knowledge, skill, and understanding” (as cited in “FQL,” 2008, p. 24).
This framework offers many impressive suggestions and ideas, many of which apply to all learning contexts, and it provides evidence in support of learners’ inclusion in the design and creation of assessments, especially assessments for the improvement of student learning.
There are also situations in which it would be inappropriate for the learner to create and/or design their own assessment. When the assessment is summative in nature, measuring quantifiable factual/declarative knowledge rather than procedural knowledge, it may still be appropriate for the learner to participate in creating a practice test, but the final exam should probably be developed by an instructor with a solid background in testing reliability and validity, two issues that come under criticism in the best of circumstances. Further, there are certain professions in which it would be detrimental, even negligent, to disregard summative testing entirely. For example, I am certain to feel far more confident knowing my doctor did not just intern and apprentice for several years of medical school, but actually knows the factual differences among diseases, symptomology, pharmacology, and treatments. A doctor who can suture a split lip is well and good, until he or she mistakenly prescribes an antifungal instead of an antibiotic.
Additionally, students who are developmentally too young to understand the process or purpose of the assessment should not be designing their own assessments. Similarly, students who have certain developmental, psychological, or learning disabilities may be unable to devise appropriate assessment strategies without significant assistance.
In an ideal world, the student/learner would voluntarily design/create an appropriately rigorous assessment, ensuring they had indeed acquired sufficient knowledge; however, having taught ninth-grade science, I can say with confidence that the vast majority of the students where I taught would not purposely select any type of rigorous assessment. In fact, I provided the students with “Tickets Out the Door”: open-book, open-note checks for understanding following the lessons. During the lessons there would be short, one-minute checks for understanding with a partner or group. The end-of-class tickets covered the same type of information and were directly related to the day’s objectives, which in turn were directly related to the standard we were working on at the time. Three times a week there would be an extremely short follow-up quiz with almost exactly the same questions, but allowing the student to elaborate, fill in the blank, or construct a response. If students did poorly, they had until the end of the grading period (quarter) to go online and retake the quiz.
The first attempt at the quiz was taken without notes, allowing students to gain feedback on how much information they had retained. When students went online, however, they were encouraged to look up the information (indirectly studying) and retake the quiz without time pressure. Every student had an opportunity to retake these short quizzes and go into the quarter or semester final with some indication of where their individual knowledge base stood. However, only a handful of students took advantage of this opportunity. Often, a student would attempt to retake the quiz only to give up without completing it because they did not “feel” like looking up the information. Other times, the student clearly guessed and would actually do worse than on the first attempt. To me, these students had not been schooled in being responsible for their own learning.
Unfortunately, by the age of 14, it was difficult to instill the importance of this concept in my students. There were some successes, but there were also many disappointments.
Ediger, M. (2000). Purposes in learner assessment. Journal of Instructional Psychology, 27(4), 244-249. Retrieved from http://web.ebscohost.com.ezp.waldenulibrary.org/ehost/pdfviewer/pdfviewer?hid=9&sid=254feb5a-f0d8-4dbc-8910-da07936d499e%40sessionmgr11&vid=3
Framework for quality learning. (2008). Retrieved from http://schoolcenter.k12albemarle.org/education/dept/dept.php?sectiondetailid=53536&
Marzano, R. J. (2006). Classroom assessment and grading that work. Alexandria, VA: Association for Supervision and Curriculum Development.
Morrison, G. R., Ross, S. M., Kalman, H. K., & Kemp, J. E. (2011). Designing effective instruction (6th ed.). Hoboken, NJ: John Wiley & Sons.
Musial, D., Thomas, J., & Nieminen, G. (2008). The nature of assessment. In Foundations of meaningful educational assessment (pp. 3-22). Retrieved from http://sylvan.live.ecollege.com/ec/courses/56611/CRS-CW-4894954/MUSIAL_Ch1.pdf
Wiggins, G. (1990). The case for authentic assessment. Practical Assessment, Research & Evaluation, 2(2). Retrieved from http://PAREonline.net/getvn.asp?v=2&n=2