Assessment Approaches


Objective or Subjective?

Objective assessments yield quantifiable, measurable data without researcher interpretation during the data collection process. Examples include self-report questionnaires, biological measurements, and some types of data obtained through structured observations and interviews. In contrast, subjective assessments require interpretation by trained experts. Examples include unstructured interviews, observations, projective tests, and some biological measurements.

The strengths and weaknesses of each are inherent to its approach. A predominant strength of objective testing is that, as long as the instrument has been vetted for reliability and validity, scoring by any researcher will not diminish or skew the data. On the other hand, because the data have been predefined and categorized specifically for that instrument, potentially important and concept-expanding information may be lost. The strength of subjective assessments is their inclusion of expert interpretation in understanding the data or information being obtained, which makes it possible to uncover vital information relevant to the topic under scrutiny. On the other hand, expert interpretations can vary widely, and in some instances there may be disagreement among those evaluating the assessment. Fortunately, there are ways to minimize these weaknesses through cooperative use of both types of measurement, as well as specific training of observers to increase interrater reliability.

Additional strengths of objective assessments are that they are relatively inexpensive to administer and score. They can also be completed in the laboratory or virtually anywhere, since online questionnaires and surveys are now widely available. The disadvantage of this approach is that there is no way to verify the identity of the person taking the survey; the researcher has to take the participant's word regarding demographic information, which may or may not be relevant to the study. An additional limitation is inherent to the survey style. For instance, if researchers are attempting to study coping mechanisms, they may want participants to provide information on a variety of related topics such as anxiety, stress, depression, family, finances, and employment. Using only questionnaires could require a number of surveys, as each questionnaire may be specific to one topic, e.g., stress versus depression. Participants may be averse to answering lengthy, overly time-consuming questionnaires about seemingly personal information. In this respect, it may be useful to combine a couple of objective questionnaires with a follow-up interview that asks specific questions related to the results of the questionnaires previously completed.

In contrast, subjective testing incurs greater expense, is more time consuming, and typically requires more naturalistic settings. For instance, although interviews may be conducted in a laboratory, participants are more likely to divulge sensitive information if they are in familiar settings. Further, observations are typically done unobtrusively so as to allow participants to engage in their regular daily behaviors and interactions, allowing for more realistic responses to the constructs under study. To improve reliability, coding is often done by more than one researcher to confirm consistency in scoring.
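
As a rough sketch of how such agreement between coders is often quantified, the snippet below computes Cohen's kappa, a standard index of interrater agreement corrected for chance. The category labels and ratings are hypothetical and are included only for illustration.

# Interrater agreement between two coders, using Cohen's kappa.
# The ratings below are hypothetical; in practice each list holds the
# category each rater assigned to the same sequence of observations.
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: observed agreement corrected for chance agreement."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected chance agreement, from each rater's marginal frequencies.
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    categories = set(rater_a) | set(rater_b)
    expected = sum((freq_a[c] / n) * (freq_b[c] / n) for c in categories)
    return (observed - expected) / (1 - expected)

rater_1 = ["avoidant", "engaged", "engaged", "avoidant", "engaged", "neutral"]
rater_2 = ["avoidant", "engaged", "neutral", "avoidant", "engaged", "neutral"]
print(f"Cohen's kappa: {cohens_kappa(rater_1, rater_2):.2f}")

Values near 1 indicate strong agreement between the two coders, while values near 0 indicate agreement no better than chance; what counts as acceptable varies by field and by how the coding categories are defined.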

Reliability and Validity

In evaluation and assessment, it always comes down to reliability and validity. Reliability refers to consistency. When we describe people we know well, we frequently judge their dependability or reliability: a person who shows up for work every day, rain or shine, sick or well, would be considered reliable. A calibrated thermostat is an example of a reliable measurement device; it will indicate a temperature in Fahrenheit every day. I recently purchased a new house and have noticed that my perception of the house temperature does not seem to agree with the temperature indicated on the thermostat. Although the thermostat reliably indicates a temperature reading every day, I am beginning to doubt its validity; it may need recalibration. Validity refers to the accuracy of the assessment in measuring what it purports to measure, i.e., if the thermostat is inaccurate, it is not a true or valid measure of the house temperature.

Consider the following example: Sally, Roger, and Erin work at ZBA Corporation. Sally and Roger are at work five days a week every week, rain or shine, sick or well. In contrast, Erin tends to take several sick days a year and arrives late to work at least once a month. Sally and Roger demonstrate reliability, whereas Erin demonstrates unreliability. On the other hand, the coworkers' behavior differs as well. While at work, Sally and Erin perform their job responsibilities efficiently and effectively; their work product is accurate and timely. Roger, on the other hand, has a tendency to procrastinate, hand off assignments, chat online, and play social games rather than work. From this perspective, Sally and Erin are the reliable employees, while Roger is unreliable. Hence, if someone were to examine only attendance or only on-the-job productivity, the results would be of questionable validity because neither alone is an accurate portrayal of worker reliability. The most valid assessment would incorporate both attendance and on-the-job productivity, providing the most complete picture. On the other hand, context is also very important. It may be that Erin is a disabled employee with a long-term, chronic illness who has worked out an arrangement with her employer to do some work from home to compensate for days when she is too ill to get to work or her special-needs transportation is unavailable. An objective test that simply tallies attendance and timeliness is unlikely to account for such necessary accommodations. Further, researchers may not even be aware there is a need for such an accommodation unless they had spoken with the employee or management personally.

The best assessments in any field are valid and reliable. If they are not, they are less useful as tools for evaluating the phenomena under study. Hence, researchers go to great lengths to reduce these limitations. For instance, internal consistency reliability and test-retest reliability are both used to determine the consistency (reliability) of results (Creswell, 2009). Internal consistency reliability refers to subparts of the test yielding results similar to those of other equivalent subparts of the test. Test-retest reliability refers to the assessment yielding similar results when taken at different times (Friedman & Schustack, 2012). Relevant types of validity are construct validity, which refers to whether the assessment measures the construct being investigated; convergent validity, which refers to the measurement agreeing with other measures of the same construct; discriminant validity, which refers to the measurement relating only to the desired construct and no others; content validity, which refers to the assessment covering the appropriate domain; and criterion-related validity, which refers to the assessment predicting relevant outcomes (Creswell, 2009; Friedman & Schustack, 2012).
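
To illustrate the two reliability checks mentioned above, the sketch below computes Cronbach's alpha, a common index of internal consistency, and a simple Pearson correlation between two administrations as a test-retest check. The four-item scale, respondents, and all scores are hypothetical.

# Two common reliability checks, sketched with hypothetical questionnaire data:
# Cronbach's alpha for internal consistency, and a Pearson correlation
# between two administrations for test-retest reliability.
import numpy as np

def cronbachs_alpha(item_scores):
    """item_scores: rows = respondents, columns = questionnaire items."""
    items = np.asarray(item_scores, dtype=float)
    k = items.shape[1]                          # number of items
    item_var = items.var(axis=0, ddof=1).sum()  # sum of item variances
    total_var = items.sum(axis=1).var(ddof=1)   # variance of total scores
    return (k / (k - 1)) * (1 - item_var / total_var)

def test_retest_r(time1, time2):
    """Pearson correlation between scores from two administrations."""
    return np.corrcoef(time1, time2)[0, 1]

# Hypothetical data: five respondents answering a four-item stress scale.
responses = [[4, 5, 4, 5],
             [2, 3, 2, 2],
             [5, 5, 4, 4],
             [1, 2, 1, 2],
             [3, 3, 3, 4]]
print(f"Cronbach's alpha: {cronbachs_alpha(responses):.2f}")

# Hypothetical total scores from the same respondents two weeks apart.
week_0 = [18, 9, 18, 6, 13]
week_2 = [17, 10, 19, 7, 12]
print(f"Test-retest r: {test_retest_r(week_0, week_2):.2f}")

In both cases, values closer to 1 indicate greater consistency; thresholds for what is considered acceptable differ across disciplines and across the kinds of constructs being measured.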


References

Creswell, J. W. (2009). Research design: Qualitative, quantitative, and mixed methods approaches (3rd ed.). Los Angeles, CA: Sage Publications, Inc.

Friedman, H. S., & Schustack, M. W. (2012). Personality: Classic theories and modern research (5th ed.). Boston, MA: Allyn & Bacon.
