Cognitive Interviewing For The Washington Group

Written by Kirsten Miller

Category: Methodology

22/08/2019

A primary effort of the Washington Group is to develop survey questions that collect globally comparable disability statistics. While much effort has focused on the operationalization of disability and construct development, question evaluation also plays a significant role. The underlying goal of any question evaluation project is to determine whether survey questions capture the pre-determined construct. The primary evaluation method used by the Washington Group is cognitive interviewing. All question sets developed by the Washington Group have undergone intensive testing using cognitive interviewing in multiple countries to ensure the validity of the information collected.

Cognitive interviewing (Miller et al., 2014) is a qualitative method that examines the question-response process, specifically the processes and considerations respondents use as they form answers to survey questions. Traditionally, the method has been used as a pretest to identify question-response problems prior to fielding the full survey. The method is practiced in various ways but is commonly characterized by in-depth interviews with a small, purposive sample of respondents to reveal their cognitive processes. The interview structure consists of respondents first answering a survey question and then providing narrative information that reveals how they went about answering it. More specifically, cognitive interview respondents are asked to describe how and why they answered the question as they did. Through the interviewing process, various types of question-response difficulties, such as interpretive errors and recall inaccuracy, are identified. DeMaio and Rothgeb (1996) describe these types of problems as ‘silent misunderstandings’ because they are not normally identified in the highly structured survey interview. When respondents have difficulty interpreting a question or forming an answer, the question is typically identified as ‘having problems’ and can be modified to address these difficulties.

In addition to examining respondent difficulties, cognitive interviewing studies determine the ways in which respondents interpret questions and apply them to their own lives, experiences and perceptions. Because cognitive interviewing studies identify the content or experiences contained in respondents’ answers, the method examines construct validity; that is, it identifies the phenomenon or set of phenomena that a variable measures. The way questions are interpreted is then compared to the developers’ intent to determine whether the information collected reflects the concepts of interest. By comparing how respondents across groups (e.g. linguistic or socio-cultural groups) interpret and process questions, cognitive interviewing studies can also examine comparability. For example, if a particular cultural group interprets a question differently from the other groups, the question is likely measuring a different construct for that group. Such differences could indicate that translations are inaccurate or that cultural equivalence is lacking (i.e., the concept in question may not exist or may differ in salience across the surveyed cultures). To this end, cognitive interviewing studies can encompass much more than identifying question problems: they can determine the way in which questions perform, specifically the concept captured and the phenomena represented in the resulting statistic across socio-cultural and linguistic groups.
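As a purely hypothetical illustration of the comparability analysis described above (the group names, themes and counts below are invented, not drawn from any Washington Group study), the following sketch cross-tabulates the interpretation themes assigned to interview narratives during qualitative coding by respondent group. A theme concentrated in a single group flags a potential comparability problem worth probing further.

```python
# Minimal, hypothetical sketch: tabulate coded cognitive-interview
# narratives by group to flag questions that may capture different
# constructs across groups. All data here are invented for illustration.
from collections import Counter

# Each record: (group, interpretation theme assigned during qualitative coding).
coded_interviews = [
    ("Country A", "auditory"),
    ("Country A", "auditory"),
    ("Country A", "auditory"),
    ("Country B", "auditory"),
    ("Country B", "listening"),  # divergent interpretation
    ("Country B", "listening"),
]

counts = Counter(coded_interviews)
groups = sorted({g for g, _ in coded_interviews})
themes = sorted({t for _, t in coded_interviews})

# Print a group-by-theme table; a theme appearing in only one group
# suggests a translation or cultural-equivalence issue to investigate.
print(f"{'':12}" + "".join(f"{t:>12}" for t in themes))
for g in groups:
    print(f"{g:12}" + "".join(f"{counts[(g, t)]:>12d}" for t in themes))
```

In practice such counts would come from systematic coding of interview transcripts; the point of the table is simply to make divergent interpretation patterns across groups visible at a glance.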

An example of how cognitive interviewing was used to evaluate the Washington Group questions is the question on seeing: Do you have difficulty seeing, even if wearing glasses?

Regardless of language or country, all respondents based their answer on the quality of their eyesight. The dimensions of vision quality that respondents considered, however, varied depending on their visual ability. For example, respondents who answered “no difficulty” described the quality of their vision as being “clear” or “having no problems.” Those who wore glasses that corrected their vision also responded “no difficulty” because the quality of their vision was good and did not prevent them from going about their daily activities. On the other hand, respondents who answered “some difficulty” did so because, in some situations, the quality of their vision was less than optimal. These respondents reported that the environment can sometimes impact their vision (e.g. “It all depends on the lighting in the room”) or that their vision is less clear when doing specific activities (e.g. “It can be difficult to see when I am reading or driving.”). Those who answered “a lot of difficulty” were respondents who consistently had less than optimal vision quality, regardless of their activity or the environmental circumstances (e.g. “I have a wrinkled retina which gives a distorted image”). Finally, those who answered “cannot do” described being completely unable to see; they had no vision and, therefore, no quality of vision to consider (e.g. “I haven’t got my eyes. My eyes were removed.”).

While the example above illustrates how respondents’ considerations can converge on an intended set of common themes, sometimes questions produce multiple interpretations across respondents that do not consistently align with intent. An example is the question on hearing (Does your child have difficulty hearing?), in which two distinct interpretations were identified in respondent narratives. Although the majority of respondents understood the question within the context of disability and considered only the child’s auditory abilities, some respondents interpreted the question to be about listening, specifically, whether their child follows instructions when asked to complete a task.

With these two different interpretations, the question captures two separate constructs. Respondents with an auditory interpretation considered their child’s ability to hear sounds in different contexts, for example, in quiet environments such as a library and in noisier places such as a classroom. Those with a listening interpretation based their answers on the degree to which their child follows requests to perform or complete tasks. These tasks included activities that the child might not want to do, for example, household chores, as well as activities met with less opposition, such as playing or watching television. Interestingly, respondents who interpreted the question to be about listening spoke of daily struggles to get their child to complete homework or chores. For these respondents, listening, as opposed to hearing, was the more salient concept, so it is not surprising that they interpreted the question in this manner. To eliminate the unintended construct of “listening,” the question was reworded as “Does your child have difficulty hearing sounds like people’s voices or music?”

For questions like those developed by the Washington Group that will be used among diverse populations, cognitive interviewing can help to ensure that respondent groups, particularly those that will be compared using the resulting survey data (e.g., country, ethnic or economic groups), base their answers on a common set of themes relating to their experiences or perceptions. Ensuring that respondents’ answers are based on common themes confirms that the questions measure the same construct and that the resulting survey data will be comparable.

REFERENCES

DeMaio, T. and Rothgeb, J. (1996). Cognitive interviewing techniques in the lab and in the field. In: N. Schwarz and S. Sudman (eds.), Answering Questions: Methodology for Determining Cognitive and Communicative Processes in Survey Research. San Francisco: Jossey-Bass, pp. 177-195.

Miller, K., Willson, S., Chepp, V., and Padilla, J.L. (2014). Cognitive Interviewing Methodology: An Interpretive Approach for Survey Question Evaluation. Hoboken, NJ: John Wiley & Sons.