Satisficing And What It Can Do to Your Data

What is satisficing? Satisficing is a term coined by Herbert Simon to explain his theory of bounded rationality. In that theory, he pointed out that human beings are bounded by “cognitive limits” which compel them to seek a satisfactory or adequate result rather than an optimal solution. He went on to state:

“In an information-rich world, the wealth of information means a dearth of something else: a scarcity of whatever it is that information consumes. What information consumes is rather obvious: it consumes the attention of its recipients. Hence a wealth of information creates a poverty of attention and a need to allocate that attention efficiently among the overabundance of information sources that might consume it.”

This way of looking at human behaviour is not far from Jon Krosnick’s statistical survey theory of satisficing, which says that optimal question answering by a survey respondent involves a great deal of cognitive work, and that some respondents will satisfice to reduce the burden. The likelihood of satisficing is linked to respondent ability, respondent motivation, and task difficulty. He further illustrates that some people may shortcut their cognitive processes in two ways:

1. Weak satisficing: the respondent executes all cognitive steps involved in optimizing, but less completely and with bias.

2. Strong satisficing: the respondent offers responses that will seem reasonable to the interviewer without any memory search or information integration.

SSI’s own research has shown that satisficing behaviour manifests in survey data. This kind of behaviour is primarily due to cognitive fatigue when taking long surveys and can have a serious impact on data quality. SSI has tracked the impact of questionnaire length on fatigue and data quality over many years. The first study, in 2004, covered the UK, France, and the Netherlands and won the award for best paper at the 2005 ESOMAR panel conference. This study was revisited in 2009 for a presentation at the ARF in New York. In both studies, the long survey proved too long: it fatigued respondents and led to satisficing behaviour, producing poor-quality data towards the end of the survey.

This year, 2015, we used the same approach to examine the impact of survey length on fatigue and response quality in the Asia Pacific market. We designed two surveys: a long version and a short version (an extract of the long survey). Each survey consisted of four sets or blocks of questions on different topics. In order to identify the point where respondents became cognitively fatigued and exhibited satisficing behaviour, the blocks were randomized for each respondent, and response patterns were compared in relation to the position of the block in the survey.
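The block-randomisation design described above can be sketched as follows. This is a minimal illustration, not SSI’s actual fieldwork code; the block names and the per-respondent seeding are assumptions made for the example.

```python
import random

# Illustrative topic names -- the real survey's four blocks are not named
# in the article, so these are placeholders.
BLOCKS = ["holidays", "groceries", "media", "finance"]

def assign_block_order(respondent_id, seed=None):
    """Return a randomised order of the four question blocks for one
    respondent. Seeding by respondent_id keeps the assignment
    reproducible, which makes later position-based analysis auditable."""
    rng = random.Random(seed if seed is not None else respondent_id)
    order = BLOCKS[:]
    rng.shuffle(order)
    return order

# Every respondent answers all four blocks; only the order differs, so
# response patterns can be compared by the position a block appeared in.
order = assign_block_order(respondent_id=42)
assert sorted(order) == sorted(BLOCKS)
```

Because each block appears equally often in each position across the sample, any systematic difference in answers by position can be attributed to where the block fell in the survey rather than to the block’s content.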

In theory, the data should always be the same whether the questions are seen early or late in the survey. However, findings suggest otherwise. After testing several measures – slider-scale test, open-ended questions, and qualification questions – results showed similar patterns of satisficing behaviour in the APAC market.

1. The Slider-Scale Test Demonstrates Less Involvement with the Subject Matter as the Survey Progresses

In this metric, we gave people the opportunity not to answer the question. The question used sliders, which respondents moved to provide their responses to a set of scaled items. Each slider bar started at the midpoint, so respondents could hit the Next button and continue with the survey without moving any slider.

As shown in the chart, an increasing number of respondents chose not to move the slider as the question was encountered later in the survey. The implication is that the midpoint response (3 or “neutral”) is a good enough answer for the respondent. We can also see the same pattern in the short survey, but the effect is less clear, with more of a “step” pattern at the midway point of the survey rather than the smooth progression seen in the long survey.
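The metric behind this chart can be computed with a simple aggregation: the share of respondents who left every slider at its midpoint default, grouped by where the slider block appeared. This is a hypothetical sketch; the field names and the midpoint value of 3 are assumptions based on the description above.

```python
MIDPOINT = 3  # the default "neutral" position described in the article

def untouched_share(responses):
    """responses: list of dicts with 'block_position' (1-4) and
    'slider_values' (list of ints, one per scaled item).
    Returns {block_position: share of respondents who never moved
    any slider off the midpoint}."""
    counts, untouched = {}, {}
    for r in responses:
        pos = r["block_position"]
        counts[pos] = counts.get(pos, 0) + 1
        if all(v == MIDPOINT for v in r["slider_values"]):
            untouched[pos] = untouched.get(pos, 0) + 1
    return {pos: untouched.get(pos, 0) / counts[pos] for pos in counts}
```

A rising share across positions 1 to 4 is the signature of satisficing: the later the block, the more respondents accepted the default rather than engaging with the items.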

2. Decline in Qualification Rates as Respondents Become Mentally Fatigued

One of the sections was about short-break holidays and was shown only to people who said they had been on such a holiday. To determine qualification for the section, we asked, “When was the last time you had a short-break holiday?”, to which respondents could either key in the year or click an option indicating they had never been on any paid short holiday. We should expect the incidence of qualification to be consistent across all the positions in the survey if there were no satisficing behaviour, but that was not the case.

The table above shows a higher qualification rate when the question was encountered first in both long and short surveys. This dropped significantly, from 90% to 80%, when the question was positioned at the end of the long survey. It is easy to imagine the mental effort required to remember something like a short-break holiday, especially if it occurred some time ago. Firstly, you need to think about what exactly a short-break holiday is, then search your memory for occasions that might qualify as such. To reduce this mental effort, respondents may start to satisfice by choosing the easier option of saying that they have not taken any such holiday. In the short survey, we do not see the same pattern. The data appears more consistent, in line with the 2004 and 2009 studies, suggesting that there is no clear satisficing behaviour except when the survey is very long.
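Whether a drop like 90% to 80% is statistically meaningful can be checked with a standard two-proportion z-test. The sample sizes below are illustrative assumptions, not figures from the study.

```python
import math

def two_proportion_z(success_a, n_a, success_b, n_b):
    """Two-sided two-proportion z-test. Returns (z, p_value).
    Uses the pooled-proportion standard error and a normal
    approximation via math.erf."""
    p_a, p_b = success_a / n_a, success_b / n_b
    pooled = (success_a + success_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# e.g. 450/500 qualifying when the section came first vs 400/500 when
# it came last (hypothetical sample sizes):
z, p = two_proportion_z(450, 500, 400, 500)
```

At these sample sizes the difference is far outside what sampling noise would produce, which is why a position-dependent qualification rate is such a serious data-quality warning.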

3. Open-Ended Questions Reveal Answers Shorten as the Survey Progresses

Another indicator we used to test data-quality effects was the respondent’s involvement in answering open questions. People who are satisficing tend to write less in open-ended questions. In the chart opposite, we see that the further towards the end of the survey the open question is positioned, the fewer characters we get.

This was primarily seen in the long survey. Moreover, the number of characters in the long survey is generally lower compared to the short survey.
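The character-count metric used above reduces to a per-position average of answer lengths. A minimal sketch, assuming a simple `(block_position, text)` representation of the open-ended responses:

```python
def mean_length_by_position(answers):
    """answers: list of (block_position, text) pairs.
    Returns {block_position: mean answer length in characters}."""
    totals, counts = {}, {}
    for pos, text in answers:
        totals[pos] = totals.get(pos, 0) + len(text)
        counts[pos] = counts.get(pos, 0) + 1
    return {pos: totals[pos] / counts[pos] for pos in totals}
```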

It is not surprising that people behave differently when they become mentally fatigued. The impact on data quality as seen in the short-break holiday example could be serious if you were trying to estimate market penetration.

In the previous studies, we concluded that an “interview length of 20 minutes or less can produce wonderful and engaged responses if well designed”. In hindsight, we see that cognitive length is the issue, not physical length. The two are related, however: the longer the survey, the more mental effort is required to answer all of the questions, and the resulting behaviour is satisficing.

Cognitive length can be reduced by making the survey seem shorter than it is. Regular breaks in the flow of the question–answer process can help; these may involve some degree of gamification or simply be a pause from answering questions. Reducing physical length is often harder to achieve, as it involves removing data points. Reviewing the questions for relevance and usefulness is nonetheless a worthwhile task if it enables good-quality responses to the key questions.

As our goal is to ask people questions that accurately measure behaviour in order to predict things about their future, we need to help them complete the surveys more quickly and ease the cognitive burden. By reducing the physical and cognitive length of the survey, we can help engage respondents and produce high-quality data.

This article was first published in the Q2 edition of the Asia Research Magazine.