News Analysis
The publication of the Our SG Conversation (OSC) Survey on 25 August 2013 has elicited varied responses. While some have taken it at face value, several political commentators have questioned the validity of the OSC survey. For them, there seems to be a disconnect between what is being discussed and agreed on in the dialogues, and the sentiment on the ground. This is a manifestation of an erosion of public confidence in survey instruments.
How would we know if the results of the year-long, focus-group-oriented OSC are indeed representative of what the average Singaporean feels and thinks, and whether its hot topics are in line with what the average Singaporean sees as important? In other words, were the OSC dialogue participants representative of our population? Only by conducting a separate survey can the Institute of Policy Studies (IPS) provide the important “second layer of authentication” to the OSC dialogues.
The OSC survey should be viewed as a snapshot of our country’s mood at a point in time – in this case, the period between December 2012 and January 2013. It cannot capture any changes in the national mood caused by significant events after that period.
The question, then, is: was the survey representative of Singaporeans?
The IPS says that a random sample of 4,000 people yields results accurate to within 1.5 percentage points of the true population figures, at a 95% confidence level. While the sample was stratified by age, gender, and ethnicity, it may not be representative of the Singapore population on other measures, such as income distribution or education level.
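That figure is consistent with the textbook worst-case margin of error for a simple random sample of this size. A minimal sketch of the calculation, in Python:

```python
import math

# Worst-case margin of error for a simple random sample,
# at the 95% confidence level.
n = 4000   # sample size reported by the IPS
p = 0.5    # the proportion that maximises the margin of error
z = 1.96   # z-score for 95% confidence

moe = z * math.sqrt(p * (1 - p) / n)
print(f"Margin of error: ±{moe:.1%}")  # prints: Margin of error: ±1.5%
```

Note that this formula assumes a simple random sample; stratification and, especially, non-response change the real error – which is why the unreported details below matter.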
Also unstated in the results is the survey’s non-response rate. Its omission matters in two ways: the non-response rate is an indicator of the survey’s accuracy and quality, and of the public’s degree of buy-in to the OSC brand.
Nor do we learn how many people declined to answer each question. By discarding “Don’t know” and other non-responses in the presentation of graphical data, the OSC poll omits information that is critical to its extrapolations to the wider population: percentages computed only over those who answered overstate the apparent consensus.
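To illustrate with hypothetical figures (not the survey’s own numbers), dropping non-responses before charting inflates every reported share:

```python
# Hypothetical figures, not the survey's own numbers: 1,000 respondents,
# of whom 200 answered "Don't know".
responses = {"Agree": 450, "Disagree": 350, "Don't know": 200}
total = sum(responses.values())

# Charting only those who picked a side renormalises the percentages.
answered = {k: v for k, v in responses.items() if k != "Don't know"}
answered_total = sum(answered.values())

for option, count in answered.items():
    print(f"{option}: {count / total:.0%} of all respondents, "
          f"but {count / answered_total:.0%} once non-responses are dropped")
# Agree: 45% of all respondents, but 56% once non-responses are dropped
# Disagree: 35% of all respondents, but 44% once non-responses are dropped
```

A 45% plurality quietly becomes a 56% majority.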
While I have no problems with how the survey was conducted, I take issue with the sloppy phrasing of several questions. The survey is peppered with vague, ambiguous, and even contentious phrases like “forward-looking government”, “holistic education”, and “gay lifestyles”.
IPS researcher Leong Chan Hoong said that “the survey did not elaborate on the contentious term” and that respondents were allowed to interpret key phrases “using a lens they are normally used to.” This runs contrary to the principles of sound survey design, in which neutral, straightforward language is used wherever possible, and explicit definitions are provided otherwise.
Like many policy preference surveys that strive to highlight a ‘trade-off’ between policy choices, the OSC survey often comes close to committing a basic mistake of survey methodology: asking double-barrelled questions. Even though one policy preference need not be linked to the other it is bundled with, the respondent is forced either to accept both or to reject both.
By resorting to double-barrelled questions, a survey oversimplifies complex policy debates that have multiple solutions and dimensions, traps respondents into believing there is a zero-sum game between policy preferences, and silences creative or radical solutions – as in Figure 13, “Comparing preferences to limits to individual freedom of expression and censorship”.
The second graph is far more egregious in its social engineering: “Censor media content; protect public interest” vs “Do not censor media content at all”. Did the IPS just tell 4,000 respondents that all censorship is to protect the public interest?
In another example (Figure 11), despite the title of the graph, the double-barrelled question asked was: “Globally competitive academic standards despite more stress” vs “More holistic, less competitive education system”. Here the double barrel consists of bundling “globally competitive academic system” with “stress” on one hand, and “holistic education system” with “less competitive system” on the other.
It is a question that places at least four items in false opposition to one another!
The angst in Singapore is over how intensely competitive the education system is for its students, not how globally competitive it is. Worded as such, the question moves the goalposts of the Singapore education debate by tying the high-stress system to its alleged global competitiveness.
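To see what the bundling conceals, treat the two bundled attributes as independent choices. In this minimal sketch the attribute labels are mine, not the survey’s: four coherent positions exist, but only two are on offer.

```python
from itertools import product

# Illustrative decomposition of the Figure 11 item into two independent
# attributes; the labels are mine, not the survey's wording.
competitiveness = ["globally competitive", "less competitive"]
stress = ["high stress", "low stress"]

# The question offers only these two bundles.
offered = {("globally competitive", "high stress"),
           ("less competitive", "low stress")}

for position in product(competitiveness, stress):
    status = "offered" if position in offered else "silenced"
    print(f"{position[0]} + {position[1]}: {status}")
# A respondent who wants a globally competitive, low-stress system
# has no option that matches their actual preference.
```

The two “silenced” positions cannot be expressed within the question at all.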
Like many policy preference surveys, the OSC survey measures intrinsic preferences instead of asking people, given today’s situation, in which direction they feel policy should move.
Some commentators applaud the OSC dialogue sessions for fostering a new spirit of active, open consultation. Yet the OSC survey, whether through design or sheer carelessness, ends up renewing the old spirit of guided, mediated consultation. In my opinion, this will do little to build trust in survey instruments or to overturn perceptions that they do not reflect the real mood in Singapore.
Vernon Chan, a freelance writer and researcher, is a sociology graduate from NUS.